GIT 8176a38a167977c9a3c28e82d383ba1b75a8d882 git+ssh://master.kernel.org/pub/scm/linux/kernel/git/davem/net-2.6.22.git

commit 8176a38a167977c9a3c28e82d383ba1b75a8d882
Author: Johannes Berg
Date: Mon Apr 23 12:20:55 2007 -0700
[WIRELESS]: Remove wext over netlink.
As scheduled, this patch removes the pointless wext over netlink code.
Signed-off-by: Johannes Berg
Signed-off-by: John W. Linville
Signed-off-by: David S. Miller

commit 98d39e5a104ec204fb572b072d64d2bc2fadb2f2
Author: Johannes Berg
Date: Mon Apr 23 12:20:05 2007 -0700
[WIRELESS] cfg80211: New wireless config infrastructure.
This patch creates the core cfg80211 code along with some sysfs bits. This is a stripped down version to allow mac80211 to function, but doesn't include any configuration yet except for creating and removing virtual interfaces. This patch includes the nl80211 header file, but it only contains the interface types which the cfg80211 interface for creating virtual interfaces relies on.
Signed-off-by: Johannes Berg
Signed-off-by: John W. Linville
Signed-off-by: David S. Miller

commit 78c7f576062a789c1fc39f953f46db7d0023fb83
Author: Johannes Berg
Date: Mon Apr 23 12:19:12 2007 -0700
[WIRELESS]: Refactor wireless Kconfig.
This patch refactors the wireless Kconfig all over and already introduces net/wireless/Kconfig with just the WEXT bit for now; the cfg80211 patch will add to that as well.
Signed-off-by: Johannes Berg
Signed-off-by: John W. Linville
Signed-off-by: David S. Miller

commit 994ab811ba3f9b5e796f4c1b558c1d8eb50e9a08
Author: Johannes Berg
Date: Mon Apr 23 12:18:20 2007 -0700
[WIRELESS]: Update MAINTAINERS for wireless mailing list.
This patch adds the linux-wireless mailing list to all appropriate entries in the MAINTAINERS file.
Signed-off-by: Johannes Berg
Signed-off-by: John W. Linville
Signed-off-by: David S. Miller

commit a807ff6f8a1bbaba1ae2124579b5cc431fdf2109
Author: Andrew Morton
Date: Sun Apr 22 23:22:24 2007 -0700
[NET]: Prevent much sadness in qdisc_lock_tree().
Signed-off-by: Andrew Morton
Signed-off-by: David S. Miller

commit 4e8471ea526f9a87142663bb2a67f3845e400a37
Author: Samuel Ortiz
Date: Sun Apr 22 12:59:20 2007 -0700
[IRDA]: Revert monitor mode change.
This reverts commit 8eb943bbb8d2b59a5dccfed691da2e3797ef7e8e. Let's not introduce yet another ioctl. The IrDA monitoring feature will be implemented through the addition of an IrDA netlink layer.
Signed-off-by: Samuel Ortiz
Signed-off-by: David S. Miller

commit cbfb32758f08db605efa33192664f613eea20083
Author: YOSHIFUJI Hideaki
Date: Sat Apr 21 19:52:04 2007 -0700
[IPV6] SNMP: Use put_unaligned() instead of memcpy().
Hint from David Miller.
Signed-off-by: YOSHIFUJI Hideaki
Signed-off-by: David S. Miller

commit e889b8bb6583187159b6856a984132322bcafc32
Author: YOSHIFUJI Hideaki
Date: Sat Apr 21 20:13:44 2007 +0900
[IPV6] SNMP: Fix several warnings without procfs.
Signed-off-by: YOSHIFUJI Hideaki

commit 1e153ad3d13c46c39cd1774535e1778fd2a7f63c
Author: YOSHIFUJI Hideaki
Date: Sat Apr 21 20:12:43 2007 +0900
[IPV6] SNMP: Avoid unaligned accesses.
Because the stats pointer may not be aligned for u64, use memcpy to fill u64 values. Issue reported by David Miller.
Signed-off-by: YOSHIFUJI Hideaki
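As background for the two SNMP commits above, a minimal userspace sketch of the same idea: writing a u64 counter into a buffer that may not be 8-byte aligned. memcpy stands in for the kernel's put_unaligned() helper, and the buffer layout is made up for illustration.

#include <stdint.h>
#include <stdio.h>
#include <string.h>

/* Store a 64-bit counter at an arbitrary (possibly unaligned) offset.
 * A direct "*(uint64_t *)dst = val" store may trap or be slow on
 * architectures with strict alignment rules; memcpy is always safe. */
static void store_u64_unaligned(void *dst, uint64_t val)
{
	memcpy(dst, &val, sizeof(val));
}

int main(void)
{
	unsigned char stats[32];
	uint64_t readback;

	/* offset 4 is deliberately not 8-byte aligned */
	store_u64_unaligned(stats + 4, 123456789ULL);

	memcpy(&readback, stats + 4, sizeof(readback));
	printf("%llu\n", (unsigned long long)readback);
	return 0;
}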

commit 8814b503d0d9ccbd5896e2ce43f2a34161a92023
Author: Ilpo Järvinen
Date: Fri Apr 20 22:18:02 2007 -0700
[TCP]: Sed magic converts func(sk, tp, ...) -> func(sk, ...)
This is (mostly) an automated change using magic:

sed -e '/struct sock \*sk/ N' -e '/struct sock \*sk/ N' -e '/struct sock \*sk/ N' -e '/struct sock \*sk/ N' -e 's|struct sock \*sk,[\n\t ]*struct tcp_sock \*tp\([^{]*\n{\n\)| struct sock \*sk\1\tstruct tcp_sock *tp = tcp_sk(sk);\n|g' -e 's|struct sock \*sk, struct tcp_sock \*tp| struct sock \*sk|g' -e 's|sk, tp\([^-]\)|sk\1|g'

Fixed four unused variable (tp) warnings that were introduced. In addition, manually added newlines after local variables and tweaked function arguments positioning.

$ gcc --version
gcc (GCC) 4.1.1 20060525 (Red Hat 4.1.1-1)
...
$ codiff -fV built-in.o.old built-in.o.new
net/ipv4/route.c:
  rt_cache_flush | +14
 1 function changed, 14 bytes added

net/ipv4/tcp.c:
  tcp_setsockopt |  -5
  tcp_sendpage   | -25
  tcp_sendmsg    | -16
 3 functions changed, 46 bytes removed

net/ipv4/tcp_input.c:
  tcp_try_undo_recovery |  +3
  tcp_try_undo_dsack    |  +2
  tcp_mark_head_lost    | -12
  tcp_ack               | -15
  tcp_event_data_recv   | -32
  tcp_rcv_state_process | -10
  tcp_rcv_established   |  +1
 7 functions changed, 6 bytes added, 69 bytes removed, diff: -63

net/ipv4/tcp_output.c:
  update_send_head          |  -9
  tcp_transmit_skb          | +19
  tcp_cwnd_validate         |  +1
  tcp_write_wakeup          | -17
  __tcp_push_pending_frames | -25
  tcp_push_one              |  -8
  tcp_send_fin              |  -4
 7 functions changed, 20 bytes added, 63 bytes removed, diff: -43

built-in.o.new:
 18 functions changed, 40 bytes added, 178 bytes removed, diff: -138

Signed-off-by: Ilpo Järvinen
Signed-off-by: David S. Miller

commit e643497bfb1795751307eae8ce20a8f16d2ec283
Author: Borislav Petkov
Date: Fri Apr 20 22:14:10 2007 -0700
[NET]: Fix comments for register_netdev().
Correct the function name in the comments supplied with register_netdev().
Signed-off-by: Borislav Petkov
Signed-off-by: David S. Miller

commit 613fcbcbc4d739f6bebbb8f7dd38fafeeeab3156
Author: G. Liakhovetski
Date: Fri Apr 20 22:12:48 2007 -0700
[IrDA]: Misc spelling corrections.
Spelling corrections, from "to" to "too".
Signed-off-by: G. Liakhovetski
Signed-off-by: Samuel Ortiz
Signed-off-by: David S. Miller

commit 0c4c9f8e01c1dfa82ac79ff05813834dd2fa087e
Author: Samuel Ortiz
Date: Fri Apr 20 22:12:07 2007 -0700
[IrDA]: Adding carriage returns to mcs7780 debug statements
Signed-off-by: Samuel Ortiz
Signed-off-by: David S. Miller

commit 8eb943bbb8d2b59a5dccfed691da2e3797ef7e8e
Author: Samuel Ortiz
Date: Fri Apr 20 22:11:05 2007 -0700
[IrDA]: IrDA monitor mode
Through a protocol specific ioctl, one can disable IrDA TX in order to monitor an IrDA link.
Signed-off-by: Samuel Ortiz
Signed-off-by: David S. Miller

commit d2cbf50687b21fe1a367a0ee00aa0a19e303b834
Author: Samuel Ortiz
Date: Fri Apr 20 22:10:13 2007 -0700
[IrDA] af_irda: IRDA_ASSERT cleanups
In af_irda.c, the multiple IRDA_ASSERT() are either hiding bugs, useless, or returning the wrong value. Let's clean that up.
Signed-off-by: Samuel Ortiz
Signed-off-by: David S. Miller

commit fc980cf9fea6de3038a8cb5c91a4a9acde842c2d
Author: Samuel Ortiz
Date: Fri Apr 20 22:09:33 2007 -0700
[IrDA] af_irda: irda_accept cleanup
This patch removes a cut'n'paste copy of wait_event_interruptible from irda_accept.
Signed-off-by: Samuel Ortiz
Acked-by: Olaf Kirch
Signed-off-by: David S. Miller

commit 04aac62ddaef1a3b45b81ba725d451472c45e8af
Author: Olaf Kirch
Date: Fri Apr 20 22:08:15 2007 -0700
[IrDA] af_irda: Silence kernel message in irda_recvmsg_stream
This patch silences an IRDA_ASSERT in irda_recvmsg_stream, as described in http://bugzilla.kernel.org/show_bug.cgi?id=7512
irda_disconnect_indication would set sk->sk_err to ECONNRESET, and a subsequent call to recvmsg would print an irritating kernel message and return -1. When a connected socket is closed by the peer, recvmsg should return 0 rather than an error. This patch fixes this.
Signed-off-by: Olaf Kirch
Signed-off-by: Samuel Ortiz
Signed-off-by: David S. Miller

commit cd10328a52c0ef432a00ada8284b28160feb2827
Author: Olaf Kirch
Date: Fri Apr 20 22:05:27 2007 -0700
[IrDA] af_irda: irda_recvmsg_stream cleanup
This patch cleans up some code in irda_recvmsg_stream, replacing some homebrew code with prepare_to_wait/finish_wait and making the code honor sock_rcvtimeo.
Signed-off-by: Olaf Kirch
Signed-off-by: Samuel Ortiz
Signed-off-by: David S. Miller

commit a29f6ab5dba926ce259bf39f7a1774d7cb7c0dc4
Author: Andi Kleen
Date: Fri Apr 20 17:12:43 2007 -0700
[NET]: Move sk_setup_caps() out of line.
It is far too large to be an inline and not in any hot paths.
Signed-off-by: Andi Kleen
Signed-off-by: David S. Miller

commit 5e96b0201ced4745aafa83bdd30df7c36efd67b1
Author: Andi Kleen
Date: Fri Apr 20 17:11:46 2007 -0700
[TCP]: Uninline tcp_done().
The function is quite big and has several call sites and nothing to collapse by compiler optimization on inlining. Besides, it's nicer to read in a .c file.
Signed-off-by: Andi Kleen
Signed-off-by: David S. Miller

commit 59c1ef7a7d602d631931a9a1604001a6f2973d7d
Author: Stephen Hemminger
Date: Fri Apr 20 17:09:22 2007 -0700
[NET]: cleanup extra semicolons
Spring cleaning time... There seem to be a lot of places in the network code that have extra bogus semicolons after conditionals. Most common is a bogus semicolon after: switch() { }
Signed-off-by: Stephen Hemminger
Signed-off-by: David S. Miller

commit bf8a342fbd687a3690e86ee3a858628274f8bb9a
Author: Stephen Hemminger
Date: Fri Apr 20 17:07:51 2007 -0700
[TCP]: TCP Illinois congestion control (rev3)
This is an implementation of TCP Illinois invented by Shao Liu at the University of Illinois. It is another variant of Reno which adapts the alpha and beta parameters based on RTT. The basic idea is to increase the window less rapidly as delay approaches the maximum. See the papers and talks to get a more complete description.
Signed-off-by: Stephen Hemminger
Signed-off-by: David S. Miller

commit 8a4f29280e4c4984e2ff51b17f61ee9baad7fda6
Author: Stephen Hemminger
Date: Fri Apr 20 17:02:45 2007 -0700
[NET]: Get rid of netdev_nit
It isn't any faster to test a boolean global variable than to do a simple check for an empty list.
Signed-off-by: Stephen Hemminger
Signed-off-by: David S. Miller

commit 029b4dccda5eb055ccf9df208c97bf0a7605743d
Author: Michal Ostrowski
Date: Fri Apr 20 16:59:24 2007 -0700
[PPPOE]: Fix device tear-down notification.
pppoe_flush_dev() kicks all sockets bound to a device that is going down. In doing so, locks must be taken in the right order consistently (sock lock, followed by the pppoe_hash_lock). However, the scan process is based on us holding the sock lock. So, when something is found in the scan we must release the lock we're holding and grab the sock lock.
This patch fixes race conditions between this code and pppoe_release(), both of which perform similar functions but would naturally prefer to grab locks in opposing orders. Both code paths are now going after these locks in a consistent manner. pppoe_hash_lock protects the contents of the "pppox_sock" objects that reside inside the hash. Thus, NULL'ing out the pppoe_dev field should be done under the protection of this lock.
Signed-off-by: Michal Ostrowski
Signed-off-by: David S. Miller

commit f5cc04d92c6d1aa7b85c426768cb637f42978f18
Author: Florian Zumbiehl
Date: Fri Apr 20 16:58:14 2007 -0700
[PPPOE]: memory leak when socket is release()d before PPPIOCGCHAN has been called on it
Below is a patch that fixes a memory leak when a PPPoE socket is release()d after it has been connect()ed, but before the PPPIOCGCHAN ioctl ever has been called on it. This is somewhat of a security problem, too, since PPPoE sockets can be created by any user, so any user can easily allocate all the machine's RAM to non-swappable address space and thus DoS the system. Is there any specific reason for PPPoE sockets being available to any unprivileged process, BTW? After all, you need a packet socket for the discovery stage anyway, so it's unlikely that any unprivileged process will ever need to create a PPPoE socket, no? Allocating all session IDs for a known AC is a kind of DoS, too, after all - with Juniper ERXes, this is really easy, actually, since they don't ever assign session ids above 8000 ...
Signed-off-by: Florian Zumbiehl
Acked-by: Michal Ostrowski
Signed-off-by: David S. Miller

commit 1d6665ede664a999fc8b5183fa9cd1c17f734ca3
Author: Florian Zumbiehl
Date: Fri Apr 20 16:57:27 2007 -0700
[PPPOE]: race between interface going down and connect()
Below is a patch that (hopefully) fixes a race between an interface going down and a connect() to a peer on that interface. Before, connect() would determine that an interface is up, then the interface could go down and all entries referring to that interface in the item_hash_table would be marked as ZOMBIEs and their references to the device would be freed, and after that, connect() would put a new entry into the hash table referring to the device that meanwhile is down already - which also would cause unregister_netdevice() to wait until the socket has been release()d.
This patch does not suffice if we are not allowed to accept connect()s referring to a device that we already acked a NETDEV_GOING_DOWN for (that is: all references are only guaranteed to be freed after NETDEV_DOWN has been acknowledged, not necessarily after the NETDEV_GOING_DOWN already). And if we are allowed to, we could avoid looking through the hash table upon NETDEV_GOING_DOWN completely and only do that once we get the NETDEV_DOWN ...
mostrows: pppoe_flush_dev is called on NETDEV_GOING_DOWN and NETDEV_DOWN to deal with this "late connect" issue. Ideally one would hope to notify users at the "NETDEV_GOING_DOWN" phase (just to pretend to be nice). However, it is the NETDEV_DOWN scan that takes all the responsibility for ensuring nobody is hanging around at that time.
Signed-off-by: Florian Zumbiehl
Acked-by: Michal Ostrowski
Signed-off-by: David S. Miller

commit ac66d04181af6cc3aab8f39086e401c399c7061c
Author: Florian Zumbiehl
Date: Fri Apr 20 16:56:31 2007 -0700
[PPPoE]: miscellaneous smaller cleanups
Below is a patch that just removes dead code/initializers without any effect (first access is an assignment) that I stumbled across while reading the source.
Signed-off-by: Florian Zumbiehl
Acked-by: Michal Ostrowski
Signed-off-by: David S. Miller
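The pppoe_flush_dev()/pppoe_release() fix above hinges on both paths taking the two locks in the same order (sock lock first, then pppoe_hash_lock). A minimal userspace sketch of that consistent-ordering rule with pthread mutexes; the names are hypothetical stand-ins, not the driver's code.

#include <pthread.h>
#include <stdio.h>

/* Stand-ins for the two locks discussed above. */
static pthread_mutex_t sock_lock = PTHREAD_MUTEX_INITIALIZER;
static pthread_mutex_t hash_lock = PTHREAD_MUTEX_INITIALIZER;

/* Both code paths take sock_lock before hash_lock; because the order
 * is identical everywhere, the classic AB-BA deadlock cannot occur. */
static void path_flush(void)
{
	pthread_mutex_lock(&sock_lock);
	pthread_mutex_lock(&hash_lock);
	/* ... unlink the entry from the hash ... */
	pthread_mutex_unlock(&hash_lock);
	pthread_mutex_unlock(&sock_lock);
}

static void path_release(void)
{
	pthread_mutex_lock(&sock_lock);
	pthread_mutex_lock(&hash_lock);
	/* ... clear the device reference under hash_lock ... */
	pthread_mutex_unlock(&hash_lock);
	pthread_mutex_unlock(&sock_lock);
}

int main(void)
{
	path_flush();
	path_release();
	puts("no deadlock");
	return 0;
}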

commit 8d25a06aa7ea95649a6abe2972d2cd4bca0be43b
Author: Stephen Hemminger
Date: Fri Apr 20 16:40:01 2007 -0700
[NET] skbuff: skb_store_bits const is backwards
Getting warnings because skb_store_bits has skb as constant, but the function overwrites it. Looks like const was on the wrong side.
Signed-off-by: Stephen Hemminger
Signed-off-by: David S. Miller

commit e41d2c0aa7159ce5e3634c2579403a399eeef2d5
Author: Stephen Hemminger
Date: Fri Apr 20 16:39:17 2007 -0700
[BRIDGE]: Fix warning in net-2.6.22
The following is a leftover from an earlier change in net-2.6.22.
Signed-off-by: Stephen Hemminger
Signed-off-by: David S. Miller

commit 34785a23b0bc66c16734a94b041e9d1910bc5386
Author: Ralf Baechle
Date: Fri Apr 20 16:06:45 2007 -0700
[AX25/NETROM/ROSE]: Convert to use modern wait queue API
Signed-off-by: Ralf Baechle
Signed-off-by: David S. Miller

commit d0613adf51470bcbbe8f2d8138e9992d6277b5f6
Author: Peter P. Waskiewicz Jr
Date: Fri Apr 20 16:05:39 2007 -0700
[AF_PACKET]: Add option to return orig_dev to userspace.
Add a packet socket option to allow the orig_dev index to be returned to userspace when passing traffic through a decapsulated device, such as the bonding driver. This is very useful for layer 2 traffic being able to report which physical device actually received the traffic, instead of having the encapsulating device hide that information. The new option is called PACKET_ORIGDEV.
Signed-off-by: Peter P. Waskiewicz Jr.
Signed-off-by: David S. Miller

commit 21b95c22cffc6897a1df0f52272df3c8db9f8551
Author: YOSHIFUJI Hideaki
Date: Fri Apr 20 15:57:45 2007 -0700
[IPV6] SNMP: Export statistics via netlink without CONFIG_PROC_FS.
Signed-off-by: YOSHIFUJI Hideaki
Signed-off-by: David S. Miller

commit efed8f01bab2ba9e3a364c6d329d1d5fb612eec9
Author: YOSHIFUJI Hideaki
Date: Fri Apr 20 15:57:15 2007 -0700
[IPV4] SNMP: Move some statistic bits to net/ipv4/proc.c.
This also fixes a memory leak in the error path.
Signed-off-by: YOSHIFUJI Hideaki
Signed-off-by: David S. Miller

commit 50695c9c1289de64f51b941f586fb7065991f628
Author: YOSHIFUJI Hideaki
Date: Fri Apr 20 15:56:48 2007 -0700
[IPV6] SNMP: Move some statistic bits to net/ipv6/proc.c.
Signed-off-by: YOSHIFUJI Hideaki
Signed-off-by: David S. Miller

commit 1eb3f59dac0aca5da4488b852e9993bce93cb874
Author: YOSHIFUJI Hideaki
Date: Fri Apr 20 15:56:20 2007 -0700
[IPV6] SNMP: Netlink interface.
Signed-off-by: YOSHIFUJI Hideaki
Signed-off-by: David S. Miller

commit 35bd649447c99e3fded2072335694bfbd5c588a8
Author: John Heffner
Date: Fri Apr 20 15:53:27 2007 -0700
[INET]: Add IP(V6)_PMTUDISC_PROBE
Add IP(V6)_PMTUDISC_PROBE value for IP(V6)_MTU_DISCOVER. This option forces us not to fragment, but does not make use of the kernel path MTU discovery. That is, it allows for user-mode MTU probing (or, packetization-layer path MTU discovery). This is particularly useful for diagnostic utilities, like traceroute/tracepath.
Signed-off-by: John Heffner
Signed-off-by: David S. Miller

commit b11cac9d7601d216a9baae927db04bc4635b8a71
Author: John Heffner
Date: Fri Apr 20 15:52:39 2007 -0700
[IPV6]: MTU discovery check in ip6_fragment()
Adds a check in ip6_fragment() mirroring ip_fragment() for packets that we can't fragment, and sends an ICMP Packet Too Big message in response.
Signed-off-by: John Heffner
Signed-off-by: David S. Miller
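To illustrate the new PMTU mode added above, a minimal userspace sketch that puts a UDP socket into probe mode (DF set, kernel path MTU discovery not consulted). IP_PMTUDISC_PROBE may be missing from older libc headers, hence the fallback define.

#include <netinet/in.h>
#include <stdio.h>
#include <sys/socket.h>
#include <unistd.h>

#ifndef IP_PMTUDISC_PROBE
#define IP_PMTUDISC_PROBE 3	/* value used by the kernel header at the time */
#endif

int main(void)
{
	int fd = socket(AF_INET, SOCK_DGRAM, 0);
	int val = IP_PMTUDISC_PROBE;

	if (fd < 0) {
		perror("socket");
		return 1;
	}

	/* The DF bit is set on outgoing packets, but the kernel's cached
	 * path MTU is ignored, so the application can probe on its own. */
	if (setsockopt(fd, IPPROTO_IP, IP_MTU_DISCOVER, &val, sizeof(val)) < 0)
		perror("setsockopt(IP_MTU_DISCOVER)");

	close(fd);
	return 0;
}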

commit 881249e8a935ca8893545f65ce0f7946d7305be5
Author: Patrick McHardy
Date: Mon Apr 16 17:07:08 2007 -0700
[NET_SCHED]: ingress: switch back to using ingress_lock
Switch ingress queueing back to use ingress_lock. qdisc_lock_tree now locks both the ingress and egress qdiscs on the device. All changes to data that might be used on both ingress and egress need to be protected by using qdisc_lock_tree instead of manually taking dev->queue_lock. Additionally the qdisc stats_lock needs to be initialized to ingress_lock for ingress qdiscs.
Signed-off-by: Patrick McHardy
Signed-off-by: David S. Miller

commit 36528cc6e947bfa22a647eb7469bd91f87ac0b07
Author: Patrick McHardy
Date: Mon Apr 16 17:02:10 2007 -0700
[NET_SCHED]: Eliminate qdisc_tree_lock
Since we're now holding the rtnl during the entire dump operation, we can remove qdisc_tree_lock, whose only purpose is to protect dump callbacks from concurrent changes to the qdisc tree.
Signed-off-by: Patrick McHardy
Signed-off-by: David S. Miller

commit 0747bba1f8f57f9fb12d653704d21e37a0f85767
Author: Patrick McHardy
Date: Mon Apr 16 17:00:53 2007 -0700
[RTNETLINK]: Remove unnecessary locking in dump callbacks
Since we're now holding the rtnl during the entire dump operation, we can remove additional locking for rtnl protected data. This patch does that for all simple cases (dev_base_lock for dev_base walking, RCU protection for FIB rule dumping).
Signed-off-by: Patrick McHardy
Signed-off-by: David S. Miller

commit ddd2cc76aa92cf8e64f38d9b5fbe9e48a255b263
Author: Patrick McHardy
Date: Mon Apr 16 16:59:10 2007 -0700
[RTNETLINK]: Hold rtnl_mutex during netlink dump callbacks
Hold rtnl_mutex during the entire netlink dump operation. This allows locking in the dump callbacks to be simplified, since they can now rely on no concurrent changes happening.
Signed-off-by: Patrick McHardy
Signed-off-by: David S. Miller

commit 4de7c6d71aa1be674ece1128f76d24026cd7f24c
Author: Patrick McHardy
Date: Fri Apr 20 14:14:21 2007 -0700
[NETLINK]: Switch cb_lock spinlock to mutex and allow to override it
Switch cb_lock to a mutex and allow netlink kernel users to override it with a subsystem specific mutex for consistent locking in dump callbacks. All netlink_dump_start users have been audited not to rely on any side-effects of the previously used spinlock.
Signed-off-by: Patrick McHardy
Signed-off-by: David S. Miller

commit 48a29f66512860701874cb01b4931207f2614d71
Author: Patrick McHardy
Date: Thu Apr 12 22:17:05 2007 -0700
[NETFILTER]: ipt_ULOG: add compat conversion functions
Signed-off-by: Patrick McHardy
Signed-off-by: David S. Miller

commit 7e073d06c403eb2fde519e31343dda80f19a90ff
Author: Patrick McHardy
Date: Thu Apr 12 22:16:38 2007 -0700
[NETFILTER]: nfnetlink_log: remove fallback to group 0
Don't fall back to group 0 if no instance can be found for the given group. This potentially confuses the listener and is not what the user configured. Also remove the ring buffer spamming that happens when rules are set up before the logging daemon is started.
Signed-off-by: Patrick McHardy
Signed-off-by: David S. Miller

commit 7de1efd18c5e7df22b47852e845024b0d64a01ca
Author: Patrick McHardy
Date: Thu Apr 12 22:16:18 2007 -0700
[NETFILTER]: {eb,ip6,ip}t_LOG: remove remains of LOG target overloading
All LOG targets always use their internal logging function nowadays, so remove the incorrect error message and handle real errors (!= -EEXIST) by failing to load.
Signed-off-by: Patrick McHardy
Signed-off-by: David S. Miller
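The LOG-target cleanup above keeps -EEXIST as the one tolerable registration error. A small sketch of that pattern; register_my_logger() is a hypothetical stand-in, not a real kernel API.

#include <errno.h>
#include <stdio.h>

/* Hypothetical registration call: returns 0 on success, -EEXIST if a
 * logger is already registered, or another negative errno on failure. */
static int register_my_logger(void)
{
	return -EEXIST;
}

static int my_module_init(void)
{
	int err = register_my_logger();

	/* "Already registered" is fine; any other error aborts loading. */
	if (err < 0 && err != -EEXIST)
		return err;
	return 0;
}

int main(void)
{
	printf("init: %d\n", my_module_init());
	return 0;
}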

commit 75a7dddb6e81c500b4b1ed5e7eb8fb0cf4cf1a0d
Author: Patrick McHardy
Date: Thu Apr 12 22:15:50 2007 -0700
[NETFILTER]: nf_nat: use HW checksumming when possible
When mangling packets forwarded to a HW checksumming capable device, offload recalculation of the checksum instead of doing it in software.
Signed-off-by: Patrick McHardy
Signed-off-by: David S. Miller

commit ba21bba2f536185d4e289e43d5e65c344c792055
Author: Bart De Schuymer
Date: Thu Apr 12 22:15:06 2007 -0700
[NETFILTER]: ebt_arp: add gratuitous arp filtering
The attached patch adds gratuitous arp filtering, more precisely: it allows checking that the IPv4 source address matches the IPv4 destination address inside the ARP header. It also adds a check for the hardware address type when matching MAC addresses (nothing critical, just for better consistency).
Signed-off-by: Bart De Schuymer
Acked-by: Carl-Daniel Hailfinger
Signed-off-by: Patrick McHardy
Signed-off-by: David S. Miller

commit d5558b8a247ed4f39e0e0c3737e3c1c470fd8820
Author: Michael Milner
Date: Thu Apr 12 22:14:23 2007 -0700
[NETFILTER]: bridge-nf: filter bridged IPv4/IPv6 encapsulated in pppoe traffic
The attached patch by Michael Milner adds support for using iptables and ip6tables on bridged traffic encapsulated in pppoe frames, similar to what's already supported for vlan.
Signed-off-by: Michael Milner
Signed-off-by: Bart De Schuymer
Signed-off-by: Patrick McHardy
Signed-off-by: David S. Miller

commit 0a17eaf4bf3726ccc890077896f17b072ec983f0
Author: Gerrit Renker
Date: Fri Apr 20 13:57:21 2007 -0700
[DCCP]: Complete documentation of dccp_sock
This fills in missing documentation for dccp_sock fields.
Signed-off-by: Gerrit Renker
Signed-off-by: Ian McDonald
Signed-off-by: Arnaldo Carvalho de Melo
Signed-off-by: David S. Miller

commit 67d5323af6b2623b373250366940c680bcff0399
Author: Gerrit Renker
Date: Fri Apr 20 13:56:47 2007 -0700
[DCCP]: Debug statements for Elapsed Time option
This prints the value of the parsed Elapsed Time when received via a Timestamp Echo option [RFC 4342, 13.3].
Signed-off-by: Gerrit Renker
Acked-by: Ian McDonald
Signed-off-by: Arnaldo Carvalho de Melo
Signed-off-by: David S. Miller

commit 4099613b7e157f11250737fda099fdbb3885dfdc
Author: Gerrit Renker
Date: Fri Apr 20 13:02:55 2007 -0700
[DCCP]: Fix bug in the calculation of very low sending rates
This fixes an error in the calculation of t_ipi when X converges towards very low sending rates (between 1 and 64 bytes per second). Although this case may not sound likely, it can be reproduced by connecting, hitting enter (1 byte sent) and waiting for some time, during which the nofeedback timer halves the sending rate until finally it reaches the region 1..64 bytes/sec. Computing X is handled correctly (tested separately); but by dividing X _before_ entering the calculation of t_ipi, X becomes zero as a result. This in turn triggers a BUG condition caught in scaled_div(). Fixed by replacing it with an equivalent statement and an explicit typecast for good measure. Calculation verified and effect of patch tested - the rate was never reduced below 1 byte per 64 seconds afterwards, i.e. not allowing divide-by-zero.
Signed-off-by: Gerrit Renker
Acked-by: Ian McDonald
Signed-off-by: Arnaldo Carvalho de Melo
Signed-off-by: David S. Miller
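The divide-by-zero fix above comes down to the order of operations when deriving the inter-packet interval t_ipi = s/X. A small sketch, illustrative only and not the CCID 3 code, of why scaling the rate down before the division collapses to zero at very low rates:

#include <inttypes.h>
#include <stdio.h>

/* t_ipi in microseconds for packet size s (bytes) and rate x_bps (bytes/s). */
static uint64_t t_ipi_us_good(uint64_t s, uint64_t x_bps)
{
	/* Multiply first, divide once: stays exact for tiny rates. */
	return (s * 1000000ULL) / x_bps;
}

static uint64_t t_ipi_us_bad(uint64_t s, uint64_t x_bps)
{
	/* Pre-scaling the rate truncates it to 0 whenever x_bps is small,
	 * and the later division would blow up (guarded here). */
	uint64_t x_scaled = x_bps / 1000000ULL;

	return x_scaled ? s / x_scaled : UINT64_MAX;
}

int main(void)
{
	/* 32-byte packets at 2 bytes/s, i.e. the 1..64 bytes/sec region. */
	printf("good: %" PRIu64 " us\n", t_ipi_us_good(32, 2));
	printf("bad:  %" PRIu64 " us\n", t_ipi_us_bad(32, 2));
	return 0;
}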

commit f6b0cf7551958f1b0e67e09e8a6b979895a72d94
Author: David S. Miller
Date: Tue Apr 10 22:10:39 2007 -0700
[S390]: Fix build on 31-bit.
Allow s390 to properly override the generic __div64_32() implementation by:
1) Using obj-y for div64.o in s390's makefile instead of lib-y
2) Adding the weak attribute to the generic implementation.
Signed-off-by: David S. Miller

commit a1ae3dd392b4dc47a1f30a11acff5998be79a387
Author: Patrick McHardy
Date: Tue Apr 10 18:30:09 2007 -0700
[SK_BUFF]: Fix missing offset adjustment in skb_copy_expand
skb_copy_expand changes the headroom, so it needs to adjust the header offsets by the difference between the old and the new value.
Signed-off-by: Patrick McHardy
Signed-off-by: David S. Miller

commit e5de72d3fa1f93eed260db0b5604d48cff75c56e
Author: Eric Dumazet
Date: Tue Apr 10 13:25:40 2007 -0700
[NET]: loopback driver can use loopback_dev integrated net_device_stats
Rusty added a new 'stats' field to struct net_device. The loopback driver can use it instead of declaring another struct net_device_stats. This saves some memory.
Signed-off-by: Eric Dumazet
Signed-off-by: David S. Miller

commit 5dfbf906f92181cbde1804c9840d46883f891cb5
Author: Akinobu Mita
Date: Sat Apr 7 18:57:07 2007 +0900
bridge: check kmem_cache_create() error
This patch checks the kmem_cache_create() return value and aborts loading the module on failure.
Signed-off-by: Akinobu Mita
Signed-off-by: Stephen Hemminger

commit c1b608cc00d14aa6ef5e9cf39a6928375e587a6d
Author: Stephen Hemminger
Date: Mon Apr 9 11:49:58 2007 -0700
bridge: allow changing hardware address to any valid address
For the case of bridging pseudo devices that get created/destroyed (Xen), we need to allow setting the address to any valid value.
Signed-off-by: Stephen Hemminger

commit 98f08150beebfcab31a21cc1572d794aa0314b42
Author: Stephen Hemminger
Date: Thu Mar 22 14:08:46 2007 -0700
bridge: change when netlink events go to STP
Need to tell the STP daemon about more events, e.g. any time a device is added, even when it is down.
Signed-off-by: Stephen Hemminger

commit eeec7aa1b8ec71f195aa3ff6b437d2e3bbe1f407
Author: Stephen Hemminger
Date: Wed Mar 21 14:22:44 2007 -0700
bridge: add support for user mode STP
This patchset, based on work by Aji_Srinivas@emc.com, allows spanning tree to be controlled from userspace. Like hotplug, it uses call_usermodehelper when spanning tree is enabled, so there is no visible API change. If the call to start usermode STP fails, it falls back to the existing kernel STP.
Signed-off-by: Stephen Hemminger

commit f056a96d8f41896ed895b5e35c488d5090ac1cb8
Author: Stephen Hemminger
Date: Mon Apr 9 12:57:54 2007 -0700
bridge: add sysfs hook to flush forwarding table
The RSTP daemon needs to be able to flush all dynamic forwarding entries in the case of a topology change. This is a temporary interface. It will change to a netlink interface before the RSTP daemon is officially released.
Signed-off-by: Stephen Hemminger

commit e1b64926818e95d7a8e2c7d5d85f124f2147d006
Author: Stephen Hemminger
Date: Wed Mar 21 13:42:33 2007 -0700
bridge: simpler hash with salt
Instead of hashing the whole Ethernet address, it should be faster to just use the last 4 bytes. Add a random salt value to the hash to make it more difficult to construct worst case DoS hash chains.
Signed-off-by: Stephen Hemminger

commit e62116099f06766be21f9968c7e64fdc5d89523a
Author: Stephen Hemminger
Date: Wed Mar 21 13:42:06 2007 -0700
bridge: don't route packets while learning
While in the STP learning state, don't route packets; wait until the forwarding delay has expired. The purpose of the forwarding delay is to detect loops in the network, and if a brouter started up and started forwarding, it could cause a flood.
Signed-off-by: Stephen Hemminger
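A toy illustration of the "simpler hash with salt" idea above: hash only the low 4 bytes of the MAC address mixed with a boot-time random salt. The mixing function here is a generic multiplicative hash for demonstration, not the bridge's actual jhash-based one.

#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>
#include <time.h>

#define HASH_BITS 8
#define HASH_SIZE (1u << HASH_BITS)

static uint32_t fdb_salt;	/* picked once at startup, like the kernel's random salt */

static unsigned int br_mac_hash(const uint8_t mac[6])
{
	/* Only the last 4 bytes vary much in practice (the first 3 are the OUI). */
	uint32_t key = (uint32_t)mac[2] << 24 | (uint32_t)mac[3] << 16 |
		       (uint32_t)mac[4] << 8  | mac[5];

	/* Salted multiplicative mix; an attacker who cannot guess the salt
	 * cannot precompute colliding addresses to build long chains. */
	return ((key ^ fdb_salt) * 2654435761u) >> (32 - HASH_BITS);
}

int main(void)
{
	const uint8_t mac[6] = { 0x00, 0x16, 0x3e, 0x12, 0x34, 0x56 };

	srand((unsigned)time(NULL));
	fdb_salt = (uint32_t)rand();

	printf("bucket %u of %u\n", br_mac_hash(mac), HASH_SIZE);
	return 0;
}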

commit 52db5b97a0c6f1506439619dc5bad41152321987
Author: Stephen Hemminger
Date: Wed Mar 21 13:38:47 2007 -0700
bridge: eliminate call by reference
Change the bridging hook to be a simple function with a return value rather than modifying the skb argument. This could generate better code and is cleaner.
Signed-off-by: Stephen Hemminger

commit 8952d6c988ec31070732117f353666a4b9a09fea
Author: Herbert Xu
Date: Mon Apr 9 11:59:39 2007 -0700
[NET]: Treat CHECKSUM_PARTIAL as CHECKSUM_UNNECESSARY
When a transmitted packet is looped back directly, CHECKSUM_PARTIAL maps to the semantics of CHECKSUM_UNNECESSARY. Therefore we should treat it as such in the stack.
Signed-off-by: Herbert Xu
Signed-off-by: David S. Miller

commit 7f8be19f5a5737ce6ad670756183235c71b560bb
Author: Herbert Xu
Date: Mon Apr 9 11:59:07 2007 -0700
[NET]: Use csum_start offset instead of skb_transport_header
The skb transport pointer is currently used to specify the start of the checksum region for transmit checksum offload. Unfortunately, the same pointer is also used during receive side processing. This creates a problem when we want to retransmit a received packet with partial checksums since the skb transport pointer would be overwritten.
This patch solves this problem by creating a new 16-bit csum_start offset value to replace the skb transport header for the purpose of checksums. This offset is calculated from skb->head so that it does not have to change when skb->data changes. No extra space is required since csum_offset itself fits within a 16-bit word, so we can use the other 16 bits for csum_start.
For backwards compatibility, just before we push a packet with partial checksums off into the device driver, we set the skb transport header to what it would have been under the old scheme.
Signed-off-by: Herbert Xu
Signed-off-by: David S. Miller

commit a4c520b2573721cb2b3a5dfe7da7dc27966e1e06
Author: Patrick McHardy
Date: Mon Apr 9 11:47:58 2007 -0700
[XFRM]: beet: fix worst case header_len calculation
esp_init_state doesn't account for the beet pseudo header in the header_len calculation, which may result in undersized skbs hitting xfrm4_beet_output, causing unnecessary reallocations in ip_finish_output2. The skbs should still always have enough room to avoid causing skb_under_panic in skb_push since we have at least 16 bytes available from LL_RESERVED_SPACE in xfrm_state_check_space.
Signed-off-by: Patrick McHardy
Signed-off-by: David S. Miller

commit 7acfeb04496b64bd82803f740776869f865fcf71
Author: Patrick McHardy
Date: Mon Apr 9 11:47:18 2007 -0700
[XFRM]: Optimize MTU calculation
Replace the probing based MTU estimation, which usually takes 2-3 iterations to find a fitting value and may underestimate the MTU, by an exact calculation. Also fix underestimation of the XFRM trailer_len, which causes unnecessary reallocations.
Signed-off-by: Patrick McHardy
Signed-off-by: David S. Miller

commit c649687f82ca967053a1c0b4fecdd5dd8dc99e4d
Author: Patrick McHardy
Date: Mon Apr 9 11:46:17 2007 -0700
[XFRM]: esp: fix skb_tail_pointer conversion bug
Fix incorrect switch of "trailer" skb by "skb" during skb_tail_pointer conversion:
- *(u8*)(trailer->tail - 1) = top_iph->protocol;
+ *(skb_tail_pointer(skb) - 1) = top_iph->protocol;
- *(u8 *)(trailer->tail - 1) = *skb_network_header(skb);
+ *(skb_tail_pointer(skb) - 1) = *skb_network_header(skb);
Signed-off-by: Patrick McHardy
Signed-off-by: Arnaldo Carvalho de Melo
Signed-off-by: David S. Miller
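A compact sketch of the csum_start/csum_offset arithmetic described in the csum_start commit above. The struct is a simplified stand-in whose fields follow the description (offsets measured from the buffer head), not the real sk_buff layout.

#include <stdint.h>
#include <stdio.h>

/* Simplified stand-in for the relevant sk_buff fields. */
struct pkt {
	unsigned char *head;	/* start of the allocated buffer        */
	unsigned char *data;	/* current start of packet data         */
	uint16_t csum_start;	/* where checksumming begins, from head */
	uint16_t csum_offset;	/* where the checksum field lives, from csum_start */
};

/* Pointer to the first byte covered by the checksum. */
static unsigned char *csum_start_ptr(const struct pkt *p)
{
	return p->head + p->csum_start;
}

/* Pointer to the checksum field the driver must fill in. */
static unsigned char *csum_field_ptr(const struct pkt *p)
{
	return csum_start_ptr(p) + p->csum_offset;
}

int main(void)
{
	unsigned char buf[256];
	struct pkt p = { .head = buf, .data = buf + 64,
			 .csum_start = 64 + 20,	/* e.g. right after an IPv4 header */
			 .csum_offset = 16 };	/* e.g. checksum position in a TCP header */

	/* Because csum_start is relative to head, moving data (push/pull)
	 * does not invalidate it, unlike a pointer into the packet. */
	printf("checksum region starts %td bytes into the packet\n",
	       csum_start_ptr(&p) - p.data);
	printf("checksum field at byte %td\n", csum_field_ptr(&p) - p.data);
	return 0;
}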

commit 0501783d4654c571f3c1d7c794d39eab0757b3ab
Author: Patrick McHardy
Date: Mon Apr 9 11:45:04 2007 -0700
[SK_BUFF]: Fix missing offset adjustment in pskb_expand_head
Since we're increasing the headroom, the header offsets need to be increased by the same amount as well.
Signed-off-by: Patrick McHardy
Signed-off-by: David S. Miller

commit 43ff647a27d354dcf48d1d3a7e3ab3d8633f5690
Author: YOSHIFUJI Hideaki
Date: Fri Apr 6 11:45:39 2007 -0700
[IPV6] FIB6RULE: Find source address during looking up route.
When looking up a route for a destination with rules that have source address restrictions, we may need to find a source address for the traffic if one is not given. Based on a patch from Noriaki TAKAMIYA.
Signed-off-by: YOSHIFUJI Hideaki
Signed-off-by: David S. Miller

commit 5cd26acecd237b48f9c1a7c0d460c5d2061264e9
Author: Patrick McHardy
Date: Thu Apr 5 16:04:04 2007 -0700
[XFRM]: beet: minor cleanups
Remove unnecessary initialization/variable.
Signed-off-by: Patrick McHardy
Signed-off-by: David S. Miller

commit 0dff7c616d642296b0562d244dec909716950e85
Author: Thomas Graf
Date: Thu Apr 5 14:35:52 2007 -0700
[RTNL]: Improve error codes for unsupported operations
The most common trigger of these errors is that the config option which would make the functionality available hasn't been enabled. Therefore returning EOPNOTSUPP gives a better idea of what is going wrong.
Signed-off-by: Thomas Graf
Signed-off-by: David S. Miller

commit 79daa31b0f53c07076f223f497619adf224d3cb0
Author: David Howells
Date: Mon Apr 2 20:19:53 2007 -0700
[NET]: Move generic skbuff stuff from XFRM code to generic code
Move generic skbuff stuff from XFRM code to generic code so that AF_RXRPC can use it too. The kdoc comments I've attached to the functions need to be checked by whoever wrote them, as I had to make some guesses about the workings of these functions.
Signed-off-by: David Howells
Signed-off-by: David S. Miller

commit 0cb8c01b96ace442f0397d394eef5c9534e1caa3
Author: Arnaldo Carvalho de Melo
Date: Sat Mar 31 12:05:49 2007 -0300
[CREDITS]: Update Arnaldo entry
Signed-off-by: Arnaldo Carvalho de Melo

commit aa49979d86bc970312346119d5a7ade473bdda4d
Author: Arnaldo Carvalho de Melo
Date: Sat Mar 31 11:55:45 2007 -0300
[SK_BUFF]: Some more conversions to skb_copy_from_linear_data
Signed-off-by: Arnaldo Carvalho de Melo

commit c0b8323d6f6ef7da8cf508fdbb1ff5f57631697b
Author: Arnaldo Carvalho de Melo
Date: Sat Mar 31 11:55:19 2007 -0300
[SK_BUFF]: Introduce skb_copy_to_linear_data{_offset}
To clearly state the intent of copying to linear sk_buffs, _offset being an overly long variant but interesting for the sake of saving some bytes.
Signed-off-by: Arnaldo Carvalho de Melo

commit b5dfe2dfed9a9d303d8214d2c54c1b7359ae62ec
Author: David S. Miller
Date: Thu Mar 29 19:16:03 2007 -0700
[NET]: Fix warnings in 3c523.c and ni52.c
We have to put back the cast to "char *" because these pointers are volatile. Reported by Andrew Morton.
Signed-off-by: David S. Miller

commit fe5178c8b7bd9145317d293ec98c5a4436e3f451
Author: Rusty Russell
Date: Wed Mar 28 14:29:08 2007 -0700
[NET]: Inline net_device_stats
Network drivers which keep stats allocate their own stats structure then write a get_stats() function to return them. It would be nice if this were done by default.
1) Add a new "stats" field to "struct net_device".
2) Add a new feature field to say "this driver uses the internal one".
3) Have a default "get_stats" which returns NULL if that feature is not set.
4) Change callers to check the result of the get_stats call for NULL, not if ->get_stats is set.
This should not break backwards compatibility with older drivers, yet allow modern drivers to shed some boilerplate code. Lightly tested: works for a modified lguest network driver.
Signed-off-by: Rusty Russell
Signed-off-by: David S. Miller

commit ef6c921e62d570e39568daa5d452f55aae2be4d2
Author: Eric Dumazet
Date: Wed Mar 28 14:22:33 2007 -0700
[NET]: random functions can use nsec resolution instead of usec
In order to get more randomness for the secure_tcpv6_sequence_number(), secure_tcp_sequence_number() and secure_dccp_sequence_number() functions, we can use the high resolution time services, providing nanosec resolution. I've also done two kmalloc()/kzalloc() conversions.
Signed-off-by: Eric Dumazet
Acked-by: James Morris
Signed-off-by: David S. Miller

commit 1822105b9f7b01e1a9bc40c32d43abe2602aaf5c
Author: Thomas Graf
Date: Wed Mar 28 14:18:52 2007 -0700
[NET] fib_rules: delay route cache flush by ip_rt_min_delay
Signed-off-by: Thomas Graf
Signed-off-by: David S. Miller

commit dfb68074c9af35ce5ed1cf06a8f163187314249d
Author: Arnaldo Carvalho de Melo
Date: Tue Mar 27 18:55:52 2007 -0300
[SK_BUFF]: Introduce skb_copy_from_linear_data{_offset}
To clearly state the intent of copying from linear sk_buffs, _offset being an overly long variant but interesting for the sake of saving some bytes.
Signed-off-by: Arnaldo Carvalho de Melo

commit 0da15317a32ce9a514461acdde1ec1f08549f86b
Author: Arnaldo Carvalho de Melo
Date: Tue Mar 27 18:38:07 2007 -0300
[BLUETOOTH]: Introduce skb->data accessor methods for hci_{acl,event,sco}_hdr
For consistency with other skb data accessors, reducing the number of direct accesses to skb->data.
Signed-off-by: Arnaldo Carvalho de Melo

commit a66b319f3ccc95b52f53e4b563aef10cb7446dda
Author: Eric Dumazet
Date: Tue Mar 27 14:18:34 2007 -0700
[IPV4]: align inet_protos[] on SMP
As IPPROTO_TCP is 6, it makes sense to make sure the inet_protos[] array is properly cache line aligned to avoid false sharing on SMP.
c0680540 b peer_total
c0680544 b inet_peer_unused_head
c0680560 B inet_protos
In this i386 example, we can see that inet_protos[IPPROTO_TCP] shares a potentially hot (and modified) cache line.
Signed-off-by: Eric Dumazet
Signed-off-by: David S. Miller

commit b8bab27487b016cf5f68d5989ecbd11974cd1eaa
Author: Eric Dumazet
Date: Tue Mar 27 13:58:31 2007 -0700
[TCP]: tcp_memory_pressure and tcp_socket are __read_mostly candidates
tcp_memory_pressure and tcp_socket currently share a cache line with tcp_memory_allocated and tcp_sockets_allocated (a very hot cache line). It makes sense to declare these variables as __read_mostly, to avoid false sharing on SMP.
ffffffff8081d9c0 B tcp_orphan_count
ffffffff8081d9c4 B tcp_memory_allocated
ffffffff8081d9c8 B tcp_sockets_allocated
ffffffff8081d9cc B tcp_memory_pressure
ffffffff8081d9d0 b tcp_md5sig_users
ffffffff8081d9d8 b tcp_md5sig_pool
ffffffff8081d9e0 b warntime.31570
ffffffff8081d9e8 b tcp_socket
Signed-off-by: Eric Dumazet
Signed-off-by: David S. Miller

commit 1e6a36284b94d5129b5fe3152611662d58ecdb1a
Author: Thomas Graf
Date: Tue Mar 27 13:56:52 2007 -0700
[NET] fib_rules: Flush route cache after rule modifications
The results of FIB rules lookups are cached in the routing cache, except for IPv6 as no such cache exists. So far, it was the responsibility of the user to flush the cache after modifying any rules. This led to many false bug reports due to misunderstanding of this concept. This patch automatically flushes the route cache after inserting or deleting a rule. Thanks to Muli Ben-Yehuda for catching a bug in the previous patch.
Signed-off-by: Thomas Graf
Signed-off-by: David S. Miller
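The two cache line commits above rely on a pair of standard annotations; a kernel-context sketch of how such declarations typically look, assuming <linux/cache.h>, with illustrative names only:

#include <linux/cache.h>

/* Written rarely (configuration time), read in the packet fast path:
 * __read_mostly groups it with other read-mostly data so that writes
 * to unrelated hot variables do not bounce its cache line. */
static int my_feature_enabled __read_mostly;

/* A table indexed in the fast path: ____cacheline_aligned_in_smp starts
 * it on its own cache line so it does not share one with a frequently
 * modified variable (false sharing). */
static void *my_proto_handlers[256] ____cacheline_aligned_in_smp;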

commit b647ed6fe4ee7e7a5c509fea47bf0cd217c23c87
Author: Eric Dumazet
Date: Tue Mar 27 13:53:04 2007 -0700
[NET]: inet_ehash_secret should be __read_mostly and set only once
There is a very tiny probability that build_ehash_secret() is called at the same time by different CPUs. Also, using __read_mostly is a must for inet_ehash_secret.
Signed-off-by: Eric Dumazet
Signed-off-by: David S. Miller

commit 5ec0b17a01e3d883bbd4c4cf4b4b01d74ba734d1
Author: Herbert Xu
Date: Mon Mar 26 23:22:20 2007 -0700
[NET]: Allow forwarding of ip_summed except CHECKSUM_COMPLETE
Right now Xen has a horrible hack that lets it forward packets with partial checksums. One of the reasons that CHECKSUM_PARTIAL and CHECKSUM_COMPLETE were added is so that we can get rid of this hack (where it creates two extra bits in the skbuff to essentially mirror ip_summed without being destroyed by the forwarding code).
I had forgotten that I've already gone through all the device drivers last time around to make sure that they're looking at ip_summed == CHECKSUM_PARTIAL rather than ip_summed != 0 on transmit. In any case, I've now done that again, so it should definitely be safe.
Unfortunately nobody has yet added any code to update CHECKSUM_COMPLETE values on forward, so I'm setting that to CHECKSUM_NONE. This should be safe to remove for bridging, but I'd like to check that code path first.
So here is the patch that lets us get rid of the hack by preserving ip_summed (mostly) on forwarded packets.
Signed-off-by: Herbert Xu
Signed-off-by: David S. Miller

commit 660b93c066b319db8a99eb918c919f03cbd55eed
Author: Janusz Krzysztofik
Date: Mon Mar 26 18:03:44 2007 -0700
[IPV4] LVS: Allow to send ICMP unreachable responses when real-servers are removed
This is a small patch by Janusz Krzysztofik to ip_route_output_slow() that allows a VIP-less LVS linux director to generate packets originating from the VIP if sysctl_ip_nonlocal_bind is set. In a nutshell, the intention is for an LVS linux director to be able to send ICMP unreachable responses to end-users when real-servers are removed.
http://archive.linuxvirtualserver.org/html/lvs-users/2007-01/msg00106.html
Signed-off-by: Simon Horman
Signed-off-by: David S. Miller

commit 4b93f45b124c2df3777586bea989a8625d9135c0
Author: Thomas Graf
Date: Mon Mar 26 17:38:53 2007 -0700
[NET] fib_rules: Add no-operation action
The use of nop rules simplifies the usage of goto rules and adds more flexibility, as they allow targets to remain while the actual content of the branches can change easily.
Signed-off-by: Thomas Graf
Signed-off-by: David S. Miller

commit 273a780ff6e9774755f5498b3e248ba5d59dcd7d
Author: Thomas Graf
Date: Mon Mar 26 17:37:59 2007 -0700
[NET] fib_rules: Mark rules detached from the device
Rules which match against device names in their selector can remain while the device itself disappears; in fact the device doesn't have to be present when the rule is added in the first place. The device name is resolved by trying when the rule is added and later by listening to NETDEV_REGISTER/UNREGISTER notifications.
This patch adds the flag FIB_RULE_DEV_DETACHED which is set towards userspace when a rule contains a device match which is unresolved at the moment. This eases spotting the reason why certain rules seem not to function properly.
Signed-off-by: Thomas Graf
Signed-off-by: David S. Miller

commit 0de5d56e50b687a22e831929497aa46688e6dfe5
Author: Thomas Graf
Date: Mon Mar 26 17:14:15 2007 -0700
[NET] fib_rules: goto rule action
This patch adds a new rule action FR_ACT_GOTO which allows skipping a set of rules by jumping to another rule. The rule to jump to is specified via the FRA_GOTO attribute which carries a rule preference. Referring to a rule which doesn't exist is explicitly allowed. Such goto rules are marked with the flag FIB_RULE_UNRESOLVED and will act like a rule with a non-matching selector. The rule will become functional as soon as its target is present.
The goto action enables performance optimizations by reducing the average number of rules that have to be passed per lookup. Example:
0:     from all lookup local
40:    not from all to 192.168.23.128 goto 32766
41:    from all fwmark 0xa blackhole
42:    from all fwmark 0xff blackhole
32766: from all lookup main
Signed-off-by: Thomas Graf
Signed-off-by: David S. Miller

commit 7fd48a0e7d4314722142fe99c87c25f0f07ddbc1
Author: David S. Miller
Date: Mon Mar 26 02:00:58 2007 -0700
[WAN] cosa.c: Build fix.
Caused by skb_reset_mac_header() changes, missing semicolon.
Signed-off-by: David S. Miller

commit 68ed4a73006f3e9c0d75abb7bf2928c4387a2fb9
Author: Stephen Hemminger
Date: Sat Mar 24 21:35:33 2007 -0700
[TCP] tcp_probe: improvements for net-2.6.22
Change tcp_probe to use ktime (needed to add one export). Add an option to only get events when cwnd changes - from Doug Leith.
Signed-off-by: Stephen Hemminger
Signed-off-by: David S. Miller

commit 045860c1b8e81c47bc5572fb8d43a943aa1a4436
Author: Stephen Hemminger
Date: Sat Mar 24 21:34:38 2007 -0700
[TCP]: cubic update for net-2.6.22
The following update received from Injong updates TCP cubic to the latest version. I am running more complete tests and will have results after 4/1. According to Injong: the new version improves on its scalability, fairness and stability, so in all properties we confirmed it shows better performance.
NCSU results (for 2.6.18 and 2.6.20) available: http://netsrv.csc.ncsu.edu/wiki/index.php/TCP_Testing
This version is described in a new Internet draft for CUBIC: http://www.ietf.org/internet-drafts/draft-rhee-tcp-cubic-00.txt
Signed-off-by: Stephen Hemminger
Signed-off-by: David S. Miller

commit 6ed88af8f0726d51900229435ef03b8ad0a5a239
Author: John Heffner
Date: Sun Mar 25 23:32:29 2007 -0700
[NET] Move DF check to ip_forward
Do the fragmentation check in ip_forward, similar to IPv6 forwarding.
Signed-off-by: John Heffner
Signed-off-by: David S. Miller

commit e0493bd1d467538c1c7665638665f95d8a346f17
Author: David S. Miller
Date: Fri Mar 23 11:40:27 2007 -0700
[INET]: Use jhash + random secret for ehash.
The days are gone when this was not an issue; there are folks out there with huge bot networks that can be used to attack the established hash tables on remote systems. So just like the routing cache and connection tracking hash, use the Jenkins hash with random secret input.
Signed-off-by: David S. Miller

commit 673259702d54853e58c00a69bbcf76b574c1d542
Author: Johannes Berg
Date: Fri Mar 23 11:37:48 2007 -0700
[NETLINK]: introduce NLA_BINARY type
This patch introduces a new NLA_BINARY attribute policy type whose verification simply checks the maximum length of the payload. It also fixes a small typo in the example.
Signed-off-by: Johannes Berg
Signed-off-by: Thomas Graf
Signed-off-by: David S. Miller
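A short kernel-context sketch of the NLA_BINARY policy type introduced above; the attribute names are made up, and with NLA_BINARY the .len field is the maximum accepted payload length.

#include <net/netlink.h>

enum {
	DEMO_ATTR_UNSPEC,
	DEMO_ATTR_COOKIE,	/* opaque blob, at most 64 bytes */
	__DEMO_ATTR_MAX
};
#define DEMO_ATTR_MAX (__DEMO_ATTR_MAX - 1)

/* Attribute parsing will reject a DEMO_ATTR_COOKIE attribute whose
 * payload is longer than .len; shorter payloads are accepted as-is. */
static const struct nla_policy demo_policy[DEMO_ATTR_MAX + 1] = {
	[DEMO_ATTR_COOKIE] = { .type = NLA_BINARY, .len = 64 },
};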

commit 42989b7341f0293f76e88e343813c630a2456c0c
Author: Vlad Yasevich
Date: Fri Mar 23 11:34:36 2007 -0700
[SCTP]: Implement SCTP_MAX_BURST socket option.
Signed-off-by: Vlad Yasevich
Signed-off-by: David S. Miller

commit 185ead0f6b4a7143339583b7707115b4b4db437a
Author: Vlad Yasevich
Date: Fri Mar 23 11:34:08 2007 -0700
[SCTP]: Implement sac_info field in SCTP_ASSOC_CHANGE notification.
As stated in the sctp socket api draft:
sac_info: variable
If the sac_state is SCTP_COMM_LOST and an ABORT chunk was received for this association, sac_info[] contains the complete ABORT chunk as defined in the SCTP specification RFC2960 [RFC2960] section 3.3.7.
We now save received ABORT chunks into the sac_info field and pass that to the user.
Signed-off-by: Vlad Yasevich
Signed-off-by: David S. Miller

commit 815c3d5f96410461bc5ad9f019291acedee26321
Author: Vlad Yasevich
Date: Fri Mar 23 11:33:12 2007 -0700
[SCTP]: Honor flags when setting peer address parameters
Parameters only take effect when a corresponding flag bit is set and a value is specified. This means we need to check the flags in addition to checking for a non-zero value.
Signed-off-by: Vlad Yasevich
Signed-off-by: David S. Miller

commit f84bad7c3027c1b58e884baf477d9c483a794e1a
Author: Vlad Yasevich
Date: Fri Mar 23 11:32:26 2007 -0700
[SCTP]: Implement SCTP_ADDR_CONFIRMED state for ADDR_CHANGE event
Signed-off-by: Vlad Yasevich
Signed-off-by: David S. Miller

commit 73eed9e55a27d2bc9ea2dd79bc86cee951ac3756
Author: Vlad Yasevich
Date: Fri Mar 23 11:32:00 2007 -0700
[SCTP]: Implement SCTP_PARTIAL_DELIVERY_POINT option.
This option induces partial delivery to run as soon as the specified amount of data has been accumulated on the association. However, we give preference to fully reassembled messages over PD messages. In any case, the window and buffer are freed up.
Signed-off-by: Vlad Yasevich
Signed-off-by: David S. Miller

commit b5276634059ce879adc8afa52f4368880a94d2c1
Author: Vlad Yasevich
Date: Fri Apr 20 12:23:15 2007 -0700
[SCTP]: Implement SCTP_FRAGMENT_INTERLEAVE socket option
This option was introduced in draft-ietf-tsvwg-sctpsocket-13. It prevents head-of-line blocking in the case of a one-to-many endpoint. Applications enabling this option really must enable the SCTP_SNDRCV event so that they know where the data belongs. Based on an earlier patch by Ivan Skytte Jørgensen.
Additionally, this functionality now permits multiple associations on the same endpoint to enter Partial Delivery. Applications should be extra careful, when using this functionality, to track EOR indicators.
Signed-off-by: Vlad Yasevich
Signed-off-by: David S. Miller

commit 090470768cf6b09b965e790695d1118554b9cb9f
Author: Patrick McHardy
Date: Fri Mar 23 11:30:04 2007 -0700
[NET_SCHED]: qdisc: remove unnecessary memory barriers
We're holding dev->queue_lock in qdisc_watchdog_schedule and qdisc_watchdog_cancel, no need for the barriers.
Signed-off-by: Patrick McHardy
Signed-off-by: David S. Miller

commit 715e70f41c818c0c5b92344a916b79898f553797
Author: Patrick McHardy
Date: Fri Mar 23 11:29:43 2007 -0700
[NET_SCHED]: Uninline tcf_destroy
Uninline tcf_destroy and add a helper function to destroy an entire filter chain.
Signed-off-by: Patrick McHardy
Signed-off-by: David S. Miller

commit 59971af4396350d619836659ae646f720ea866d4
Author: Patrick McHardy
Date: Fri Mar 23 11:29:25 2007 -0700
[NET_SCHED]: turn PSCHED_GET_TIME into inline function
Signed-off-by: Patrick McHardy
Signed-off-by: David S. Miller

commit 3073f889ce5564628126121132341b29297eefea
Author: Patrick McHardy
Date: Fri Mar 23 11:29:11 2007 -0700
[NET_SCHED]: turn PSCHED_TDIFF_SAFE into inline function
Also rename to psched_tdiff_bounded.
Signed-off-by: Patrick McHardy
Signed-off-by: David S. Miller
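For the SCTP socket options added earlier in this series (SCTP_MAX_BURST, SCTP_FRAGMENT_INTERLEAVE, SCTP_PARTIAL_DELIVERY_POINT), a userspace sketch of how an application might enable two of them; it assumes lksctp-tools' <netinet/sctp.h>, and the interleave level and delivery point are arbitrary example values.

#include <netinet/in.h>
#include <netinet/sctp.h>
#include <stdint.h>
#include <stdio.h>
#include <sys/socket.h>
#include <unistd.h>

int main(void)
{
	int fd = socket(AF_INET, SOCK_SEQPACKET, IPPROTO_SCTP);
	int interleave = 2;		/* allow interleaving across associations */
	uint32_t pd_point = 8192;	/* start partial delivery after 8 KB queued */

	if (fd < 0) {
		perror("socket");
		return 1;
	}

	if (setsockopt(fd, IPPROTO_SCTP, SCTP_FRAGMENT_INTERLEAVE,
		       &interleave, sizeof(interleave)) < 0)
		perror("SCTP_FRAGMENT_INTERLEAVE");

	if (setsockopt(fd, IPPROTO_SCTP, SCTP_PARTIAL_DELIVERY_POINT,
		       &pd_point, sizeof(pd_point)) < 0)
		perror("SCTP_PARTIAL_DELIVERY_POINT");

	close(fd);
	return 0;
}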

commit 144f5e3c448323a7bba7d5030a24aebfa270e13d
Author: Patrick McHardy
Date: Fri Mar 23 11:28:55 2007 -0700
[NET_SCHED]: kill PSCHED_TDIFF
Signed-off-by: Patrick McHardy
Signed-off-by: David S. Miller

commit 03ceaab1d7e27e0b27a78b542b3965e627af6f92
Author: Patrick McHardy
Date: Fri Mar 23 11:28:30 2007 -0700
[NET_SCHED]: kill PSCHED_SET_PASTPERFECT/PSCHED_IS_PASTPERFECT
Use direct assignment and comparison instead.
Signed-off-by: Patrick McHardy
Signed-off-by: David S. Miller

commit fc9e902d2cf094cfb7156c838589f5fdbd800332
Author: Patrick McHardy
Date: Fri Mar 23 11:28:07 2007 -0700
[NET_SCHED]: kill PSCHED_TLESS
Signed-off-by: Patrick McHardy
Signed-off-by: David S. Miller

commit bffe2ca94a400f01d35f6624e6701f01e54369ea
Author: Patrick McHardy
Date: Fri Mar 23 11:27:45 2007 -0700
[NET_SCHED]: kill PSCHED_TADD/PSCHED_TADD2
Signed-off-by: Patrick McHardy
Signed-off-by: David S. Miller

commit e9b3395ea07fb742355642a8c5f6ba1de88d7f06
Author: Patrick McHardy
Date: Fri Mar 23 11:27:29 2007 -0700
[NET_SCHED]: kill PSCHED_AUDIT_TDIFF
Signed-off-by: Patrick McHardy
Signed-off-by: David S. Miller

commit ce05e014e72e39c1858f102095e62a0b79cca8fb
Author: Patrick McHardy
Date: Fri Mar 23 11:27:04 2007 -0700
[NET_SCHED]: sch_netem: fix off-by-one in send time comparison
netem checks PSCHED_TLESS(cb->time_to_send, now) to find out whether it is allowed to send a packet, which is equivalent to cb->time_to_send < now. Use !PSCHED_TLESS(now, cb->time_to_send) instead to properly handle cb->time_to_send == now.
Signed-off-by: Patrick McHardy
Signed-off-by: David S. Miller

commit bf5d66348d3d83a47b5297c5eeadd9f305bce00d
Author: Thomas Graf
Date: Fri Mar 23 11:17:57 2007 -0700
[NETFILTER] nfnetlink: netlink_run_queue() already checks for NLM_F_REQUEST
Patrick has made use of netlink_run_queue() in nfnetlink while my patches have been waiting for net-2.6.22 to open. So this check for NLM_F_REQUEST can go as well.
Signed-off-by: Thomas Graf
Signed-off-by: Patrick McHardy
Signed-off-by: David S. Miller

commit d9c095d5e86bf8067b085d136f726ce3cd1ba70a
Author: Yasuyuki Kozakai
Date: Fri Mar 23 11:17:27 2007 -0700
[NETFILTER]: nf_conntrack: kill destroy() in struct nf_conntrack for diet
The per-conntrack destructor is unnecessary, so this replaces it with a system-wide destructor.
Signed-off-by: Yasuyuki Kozakai
Signed-off-by: Patrick McHardy
Signed-off-by: David S. Miller

commit 64a39c79f9db447100e10c69d04902bdbcf28541
Author: Yasuyuki Kozakai
Date: Fri Mar 23 11:17:07 2007 -0700
[NETFILTER]: nf_conntrack: don't use nfct in skb if conntrack is disabled
Signed-off-by: Yasuyuki Kozakai
Signed-off-by: Patrick McHardy
Signed-off-by: David S. Miller

commit b6497fe79ace713b2296a90a085c81542314f3a4
Author: Patrick McHardy
Date: Fri Mar 23 11:16:30 2007 -0700
[NETFILTER]: Use setup_timer
Signed-off-by: Patrick McHardy
Signed-off-by: David S. Miller

commit ef14e74ebcfdb1682cce614c7cfbcd22ac68343b
Author: Patrick McHardy
Date: Fri Mar 23 11:12:50 2007 -0700
[NETFILTER]: nfnetlink_log: remove conditional locking
This is gross; have the wrapper function take the lock.
Signed-off-by: Patrick McHardy
Signed-off-by: David S. Miller

commit c735cdd260f5705b7159053f14ccfe9984478c97
Author: Michal Miroslaw
Date: Fri Mar 23 11:12:21 2007 -0700
[NETFILTER]: nfnetlink_log: micro-optimization: inst->skb != NULL in __nfulnl_send()
No function other than nfulnl_timer() calls __nfulnl_send() with inst->skb == NULL.
Signed-off-by: Michal Miroslaw
Signed-off-by: Patrick McHardy
Signed-off-by: David S. Miller

commit 890a90e5916443a0ff29a61fb2301c11801394cb
Author: Michal Miroslaw
Date: Fri Mar 23 11:12:03 2007 -0700
[NETFILTER]: nfnetlink_log: iterator functions need iter_state * only
get_*() don't need access to seq_file - iter_state is enough for them.
Signed-off-by: Michal Miroslaw
Signed-off-by: Patrick McHardy
Signed-off-by: David S. Miller

commit 6a7fa3252b42d114a6554919fc0959214f29ea83
Author: Michal Miroslaw
Date: Fri Mar 23 11:11:48 2007 -0700
[NETFILTER]: nfnetlink_log: micro-optimization: don't modify destroyed instance
Simple micro-optimization: don't change any options if the instance is being destroyed.
Signed-off-by: Michal Miroslaw
Signed-off-by: Patrick McHardy
Signed-off-by: David S. Miller

commit ec5c98800ac7c32a73d47d5b45efff40b5ad831b
Author: Michal Miroslaw
Date: Fri Mar 23 11:11:31 2007 -0700
[NETFILTER]: nfnetlink_log: micro-optimization for inst==NULL in nfulnl_recv_config()
Simple micro-optimization: don't call instance_put() on known NULL pointers.
Signed-off-by: Michal Miroslaw
Signed-off-by: Patrick McHardy
Signed-off-by: David S. Miller

commit 7d62b06a492faf339d91fa7cfb570d29cdabee94
Author: Michal Miroslaw
Date: Fri Mar 23 11:11:05 2007 -0700
[NETFILTER]: nfnetlink_log: kill duplicate code
Kill some duplicate code in nfulnl_log_packet().
Signed-off-by: Michal Miroslaw
Signed-off-by: Patrick McHardy
Signed-off-by: David S. Miller

commit 5415ed696094d2be4e9253616c57313b2557adf0
Author: Michal Miroslaw
Date: Fri Mar 23 11:10:47 2007 -0700
[NETFILTER]: nfnetlink_log: don't count max(a,b) twice
We don't need a local nlbufsiz (skb size) as nfulnl_alloc_skb() takes the maximum anyway.
Signed-off-by: Michal Miroslaw
Signed-off-by: Patrick McHardy
Signed-off-by: David S. Miller

commit fb26673e010b5a0820f88e914393037592b0479a
Author: Patrick McHardy
Date: Fri Mar 23 11:10:13 2007 -0700
[NETFILTER]: Remove changelogs and CVS IDs
Signed-off-by: Patrick McHardy
Signed-off-by: David S. Miller

commit 38e9b59c8e927664ddde76e4263184dfb05b329a
Author: Stephen Hemminger
Date: Fri Mar 23 00:12:09 2007 -0700
[NETEM]: spelling errors
Get rid of some of my creative spelling.
Signed-off-by: Stephen Hemminger
Signed-off-by: David S. Miller

commit 5c257dbcf7c92d67988a0414d3afbf595bbf7096
Author: Thomas Graf
Date: Thu Mar 22 23:30:55 2007 -0700
[NETLINK]: Directly return -EINTR from netlink_dump_start()
Now that all users of netlink_dump_start() use netlink_run_queue() to process the receive queue, it is possible to return -EINTR from netlink_dump_start() directly, therefore simplifying the callers.
Signed-off-by: Thomas Graf
Signed-off-by: David S. Miller

commit bee03febc96aa9e631a15a5bef108cf04d2c07c1
Author: Thomas Graf
Date: Thu Mar 22 23:30:35 2007 -0700
[IPv4] diag: Use netlink_run_queue() to process the receive queue
Makes use of netlink_run_queue() to process the receive queue and converts inet_diag_rcv_msg() to use the type safe netlink interface.
Signed-off-by: Thomas Graf
Signed-off-by: David S. Miller

commit d9d35beee870ddbfe7d00ce71c95478db9eaa92d
Author: Thomas Graf
Date: Thu Mar 22 23:30:12 2007 -0700
[NETLINK]: Remove error pointer from netlink message handler
The error pointer argument in netlink message handlers is used to signal the special case where processing has to be interrupted because a dump was started but no error happened. Instead it is simpler and clearer to return -EINTR and have netlink_run_queue() deal with getting the queue right.
nfnetlink passed on this error pointer to its subsystem handlers but only uses it to signal the start of a netlink dump. Therefore it can be removed there as well. This patch also cleans up the error handling in the affected message handlers to be consistent, since it had to be touched anyway.
Signed-off-by: Thomas Graf
Signed-off-by: David S. Miller

commit 8adef93e817d35240d90be2428ea052960e1f0d9
Author: Thomas Graf
Date: Thu Mar 22 23:29:10 2007 -0700
[NETLINK]: Ignore control messages directly in netlink_run_queue()
Changes netlink_rcv_skb() to skip netlink control messages and not pass them on to the message handler.
Signed-off-by: Thomas Graf
Signed-off-by: David S. Miller

commit 41f3281e71b4e9a5b9e0b3d3340904858a3835d6
Author: Thomas Graf
Date: Thu Mar 22 23:28:46 2007 -0700
[NETLINK]: Ignore !NLM_F_REQUEST messages directly in netlink_run_queue()
netlink_rcv_skb() is changed to skip messages which don't have the NLM_F_REQUEST bit, to avoid every netlink family having to perform this check on its own.
Signed-off-by: Thomas Graf
Signed-off-by: David S. Miller

commit f586c68d6117273a2d97ea47927f126ac960e60e
Author: Thomas Graf
Date: Thu Mar 22 23:27:39 2007 -0700
[NETLINK]: Remove unused groups variable
Leftover from the dynamic multicast groups allocation work.
Signed-off-by: Thomas Graf
Signed-off-by: David S. Miller

commit 13cad6cc21c99551023a5ebbe267e167fddfcb3f
Author: Thomas Graf
Date: Thu Mar 22 23:27:19 2007 -0700
[TCP] westwood: Use type safe netlink interface
Signed-off-by: Thomas Graf
Signed-off-by: David S. Miller

commit ab8f53df37cd70edf2e28bedd1abd93a5898b1c6
Author: Thomas Graf
Date: Thu Mar 22 23:27:01 2007 -0700
[TCP] vegas: Use type safe netlink interface
Signed-off-by: Thomas Graf
Signed-off-by: David S. Miller

commit 9733cda5528bcdc0154d7722562fd208b8aa5588
Author: Thomas Graf
Date: Thu Mar 22 21:41:06 2007 -0700
[RTNL]: Properly return rtnl message handler
Signed-off-by: Thomas Graf
Signed-off-by: David S. Miller

commit e1d03d27b65421f977d1038de3adeae4c4fd795e
Author: Stephen Hemminger
Date: Thu Mar 22 12:18:35 2007 -0700
[NET_SCHED] qdisc: avoid transmit softirq on watchdog wakeup
If possible, avoid having to do a transmit softirq when a qdisc watchdog decides to re-enable. The watchdog routine runs off a timer, so it is already in the same effective context as the softirq.
Signed-off-by: Stephen Hemminger
Signed-off-by: David S. Miller

commit d890e956b5c22c080bae204c9252435ee97fd17c
Author: Stephen Hemminger
Date: Thu Mar 22 12:17:42 2007 -0700
[NETEM]: avoid excessive requeues
The netem code would call getnstimeofday() and dequeue/requeue after every packet, even if it was waiting. Avoid this overhead by using the throttled flag.
Signed-off-by: Stephen Hemminger
Signed-off-by: David S. Miller

commit d3a0055da11e93d9ee8c730b55065f9477528faf
Author: Stephen Hemminger
Date: Thu Mar 22 12:17:05 2007 -0700
[NETEM]: Optimize tfifo
In most cases, the next packet will be sent after the last one. So optimize that case.
Signed-off-by: Stephen Hemminger
Signed-off-by: David S. Miller

commit 384993bb4dc341fbf6669a72de0f06ad24d8b153
Author: Stephen Hemminger
Date: Thu Mar 22 12:16:21 2007 -0700
[NETEM]: use better types for time values
The random number generator always generates 32 bit values. The time values are limited by psched_tdiff_t.
Signed-off-by: Stephen Hemminger
Signed-off-by: David S. Miller

commit 0c906b1911a7ee4162a1d2d8347d12dfd54736dc
Author: Stephen Hemminger
Date: Thu Mar 22 12:15:45 2007 -0700
[NETEM]: report reorder percent correctly.
If you set up netem to just delay packets, "tc qdisc ls" will report the reordering as 100%. Well, it's a lie: reorder isn't used unless gap is set, so just set the value to 0 so the utility's output is correct. Signed-off-by: Stephen Hemminger Signed-off-by: David S. Miller commit 3bf514b4241a825f33b55e434cd5002dc71041a9 Author: Stephen Hemminger Date: Thu Mar 22 12:10:58 2007 -0700 [TCP]: cubic optimization Use willy's work in optimizing the cube root by having a table for small values. Signed-off-by: Stephen Hemminger Signed-off-by: David S. Miller commit a185e86cb83bcb345fe359f14690689393919d2e Author: Stephen Hemminger Date: Thu Mar 22 12:10:18 2007 -0700 [LIB]: div64_64 optimization Minor optimization of div64_64. do_div() already does optimization for the case of a 32 by 32 divide, so no need to do it here. Signed-off-by: Stephen Hemminger Signed-off-by: David S. Miller commit 89862814dfad9c5a8556cc81730b0509c3330226 Author: Thomas Graf Date: Sun Mar 25 23:24:24 2007 -0700 [NET] rules: Unified rules dumping Implements a unified, protocol-independent rules dumping function which is capable of dumping either a specific protocol family or all of them. This speeds up dumping as fewer lookups are required. Signed-off-by: Thomas Graf Signed-off-by: David S. Miller commit e7ad56720557b86e3acd84f9fc802a42b35a0434 Author: Thomas Graf Date: Thu Mar 22 11:59:42 2007 -0700 [RTNL]: Use rtnl registration interface for dump-all aliases Signed-off-by: Thomas Graf Signed-off-by: David S. Miller commit 0e0f40790fe0242431b45066fe7a895d04cb60a3 Author: Thomas Graf Date: Thu Mar 22 11:59:03 2007 -0700 [BRIDGE]: Use rtnl registration interface Signed-off-by: Thomas Graf Signed-off-by: David S. Miller commit 6b7f2e0085aa167cfb80b5d4c9a8488e5ce64bfa Author: Thomas Graf Date: Thu Mar 22 11:58:32 2007 -0700 [IPv6]: Use rtnl registration interface Signed-off-by: Thomas Graf Signed-off-by: David S. Miller commit 0b909e39f670bb4979b291c35615ff014306288b Author: Thomas Graf Date: Thu Mar 22 11:57:46 2007 -0700 [DECNet]: Use rtnl registration interface Signed-off-by: Thomas Graf Acked-by: Steven Whitehouse Signed-off-by: David S. Miller commit 27750a6e829dc2fd98fbf4db68a152c0a6798277 Author: Thomas Graf Date: Thu Mar 22 11:56:59 2007 -0700 [PKT_SCHED] act: Use rtnl registration interface Signed-off-by: Thomas Graf Signed-off-by: David S. Miller commit 59bcf5a0e31459eabea0706ec7c93d20e2b9f6d4 Author: Thomas Graf Date: Thu Mar 22 11:56:22 2007 -0700 [PKT_SCHED] cls: Use rtnl registration interface Signed-off-by: Thomas Graf Signed-off-by: David S. Miller commit b556b7b860ef384986f057e64da94d9df38ae398 Author: Thomas Graf Date: Thu Mar 22 11:55:50 2007 -0700 [PKT_SCHED] qdisc: Use rtnl registration interface Signed-off-by: Thomas Graf Signed-off-by: David S. Miller commit 2cf7a3a606439ebc2ead888360ec03b1cf190037 Author: Thomas Graf Date: Thu Mar 22 11:55:17 2007 -0700 [IPv4]: Use rtnl registration interface Signed-off-by: Thomas Graf Signed-off-by: David S. Miller commit d9e18a7b5b82c14bbf985bcdda434c0362898012 Author: Thomas Graf Date: Sun Mar 25 23:20:05 2007 -0700 [NET] rules: Use rtnl registration interface Signed-off-by: Thomas Graf Signed-off-by: David S. Miller commit 4599bc58b909f48a99e10ea892bdc4d4fd18d20b Author: Thomas Graf Date: Thu Mar 22 11:50:06 2007 -0700 [NEIGH]: Use rtnl registration interface Signed-off-by: Thomas Graf Signed-off-by: David S.
Miller commit 4bda00f35fe617f4aaeed4f7085e6b778e13ea9a Author: Thomas Graf Date: Thu Mar 22 11:49:22 2007 -0700 [NET] link: Use rtnl registration interface Signed-off-by: Thomas Graf Signed-off-by: David S. Miller commit cbc661a00bbc622863dcd6aacc9539455ceabd12 Author: Thomas Graf Date: Thu Mar 22 11:48:11 2007 -0700 [RTNL]: Message handler registration interface This patch adds a new interface to register rtnetlink message handlers replacing the exported rtnl_links[] array which required many message handlers to be exported unnecessarly. Signed-off-by: Thomas Graf Signed-off-by: David S. Miller commit 7a178a9546d3a401501dbfdfa35fc4e7f305accd Author: Gerrit Renker Date: Tue Mar 20 15:31:56 2007 -0300 [CCID3]: Use initial RTT sample from SYN exchange The patch follows the following recommendation made in an erratum to RFC 4342: "Senders MAY additionally make use of other available RTT measurements, including those from the initial Request-Response packet exchange." It implements larger initial windows with regard to this inital RTT measurement, using the mechanism suggested in draft-ietf-dccp-rfc3448bis, section 4.2. Signed-off-by: Gerrit Renker Signed-off-by: Ian McDonald Signed-off-by: Arnaldo Carvalho de Melo Signed-off-by: David S. Miller commit 0d344269d8540de9939d8a73d781d5f3f1aff475 Author: Gerrit Renker Date: Tue Mar 20 15:27:17 2007 -0300 [DCCP]: Sample RTT from SYN exchange Function: commit 1ffbeec75a4477cba6533bd1bdc54219e317b9e1 Author: Gerrit Renker Date: Tue Mar 20 15:24:37 2007 -0300 [CCID3]: Use function for RTT sampling This replaces the existing occurrences of RTT sampling with the use of the new function dccp_sample_rtt. Signed-off-by: Gerrit Renker Signed-off-by: Ian McDonald Signed-off-by: Arnaldo Carvalho de Melo Signed-off-by: David S. Miller commit 7f6d4136ea8b9cba7e3b25be8e3d3da6e4780863 Author: Gerrit Renker Date: Tue Mar 20 15:23:18 2007 -0300 [DCCP]: Provide function for RTT sampling A recurring problem, in particular in the CCID code, is that RTT samples from packets with timestamp echo and elapsed time options need to be taken. This service is provided via a new function dccp_sample_rtt in this patch. Furthermore, to protect against `insane' RTT samples, the sampled value is bounded between 100 microseconds and 4 seconds - for which u32 is sufficient. Signed-off-by: Gerrit Renker Signed-off-by: Ian McDonald Signed-off-by: Arnaldo Carvalho de Melo Signed-off-by: David S. Miller commit 263e71c4f37ef7269adda50e169f88dadf3d3e56 Author: Gerrit Renker Date: Tue Mar 20 15:19:07 2007 -0300 [CCID3]: Handle Idle and Application-Limited periods This updates the code with regard to handling idle and application-limited periods as specified in [RFC 4342, 5.1]. Background: commit 2ed722f31f6643f68358256ea0be6566011e180f Author: Gerrit Renker Date: Tue Mar 20 15:12:10 2007 -0300 [CCID3]: Wrap computation of RFC3390-initial rate into separate function The CCID 3 and TFRC specs (RFC 4342, RFC 3448, draft-3448bis) make frequent reference to the computation of the RFC-3390 initial sending rate: 1. Initial sending rate when RTT is known (RFC 4342, p. 6) 2. Response to Idle/Application-Limited periods (RFC 4342, 5.1) This warrants putting the code into its own function, for later code reuse. Signed-off-by: Gerrit Renker Signed-off-by: Ian McDonald Signed-off-by: Arnaldo Carvalho de Melo Signed-off-by: David S. 
Miller commit 52787aad8183fbe5090fa3ac24da6584f3048f30 Author: Gerrit Renker Date: Tue Mar 20 15:04:30 2007 -0300 [CCID3]: Remove build warnings for 64bit This clears the following sparc64 build warnings: 1) warning: format "%ld" expects type "long int", but argument 3 has type "suseconds_t" 2) warning: format "%llu" expects type "long long unsigned int", but argument 3 has type "__u64" Fixed by using typecast to unsigned. This is argued to be safe, since the quantities, after de-scaling (factor 2^6) fit all in u32. Signed-off-by: Gerrit Renker Signed-off-by: Ian McDonald Signed-off-by: Arnaldo Carvalho de Melo Signed-off-by: David S. Miller commit 7bcb97276cae9eb8f7c4d84c609d19b4db6c117c Author: Gerrit Renker Date: Tue Mar 20 15:02:10 2007 -0300 [CCID3]: More to see in dccp_probe This adds a few more fields of interest to /proc/net/dccpprobe, the following output ensues: 1 2 3 4 5 6 7 8 9 10 11 sec.usec src:sport dst:dport size s rtt p X_calc X_recv X t_ipi Also made the formatting consistent. Scripts that go with this can be downloaded from http://139.133.210.30/users/gerrit/dccp/dccp_probe/ Signed-off-by: Gerrit Renker Acked-by: Ian McDonald Signed-off-by: Arnaldo Carvalho de Melo Signed-off-by: David S. Miller commit f7fdf9e0c4d8c30d10ce5cf1d501defff6dd1bcc Author: Gerrit Renker Date: Tue Mar 20 15:01:14 2007 -0300 [CCID3]: Add documentation for socket options This updates the documentation on CCID3-specific options. Signed-off-by: Gerrit Renker Acked-by: Ian McDonald Signed-off-by: Arnaldo Carvalho de Melo Signed-off-by: David S. Miller commit fd55779150d4924a59f94c03389fe982f8366a7f Author: Gerrit Renker Date: Tue Mar 20 15:00:28 2007 -0300 [DCCP]: More debug information for dccp_wait_for_ccid This adds more detail in the wait_for_ccid packet scheduling loop. In particular, it informs about (i) when delay is used and (ii) why a packet is discarded. Signed-off-by: Gerrit Renker Signed-off-by: Ian McDonald Signed-off-by: Arnaldo Carvalho de Melo Signed-off-by: David S. Miller commit 1d48589509af9a299f2d6ae631f4bb9850926c84 Author: Gerrit Renker Date: Tue Mar 20 14:59:23 2007 -0300 [DCCP]: Always use debug-toggle parameters Currently debugging output (when configured) is automatically enabled when DCCP modules are compiled into the kernel rather than built as loadable modules. This is not necessary, since the module parameters in this case become kernel commandline parameters, e.g. DCCP or CCID3 debug output can be enabled for a static build by appending the following at the boot prompt: dccp.dccp_debug=1 dccp_ccid3.ccid3_debug=1 This patch therefore does away with the more complicated way of always enabling debug output for static builds Signed-off-by: Gerrit Renker Signed-off-by: Arnaldo Carvalho de Melo Signed-off-by: David S. Miller commit adb15ec2569921bddb9c3802dacf42f90c16a115 Author: Gerrit Renker Date: Tue Mar 20 14:56:11 2007 -0300 [CCID3]: Remove race condition and update t_ipi when `s' changes This: 1. removes a race condition in the access to the scheduled send time t_nom which results from allowing asynchronous r/w access to t_nom without locks; 2. updates the inter-packet interval t_ipi = s/X when `s' changes, following a suggestion by Ian McDonald. Signed-off-by: Gerrit Renker Acked-by: Ian McDonald Signed-off-by: Arnaldo Carvalho de Melo Signed-off-by: David S. 
Miller commit ea6878f2eacfcb02218731fec4b8dd1c5bc25740 Author: Ian McDonald Date: Tue Mar 20 14:49:20 2007 -0300 [CCID3]: More verbose debugging This adds a few debugging statements to ccid3.c Signed-off-by: Ian McDonald Signed-off-by: Gerrit Renker Signed-off-by: Arnaldo Carvalho de Melo Signed-off-by: David S. Miller commit 76fa19b3b056d94dbe5138e9f9b7222b30423aed Author: Ian McDonald Date: Tue Mar 20 14:46:52 2007 -0300 [CCID3]: Fix use of invalid loss intervals This fixes a bug which uses an invalid comparison. The bug resulted in the use of invalid loss intervals. Signed-off-by: Ian McDonald Acked-by: Gerrit Renker Signed-off-by: Arnaldo Carvalho de Melo Signed-off-by: David S. Miller commit 41841f3ae91f0bd95de9050bdec89fcc3c721b25 Author: Gerrit Renker Date: Tue Mar 20 14:28:44 2007 -0300 [CCID3]: Use MSS for larger initial windows This improves the slow-start phase by using the MSS (as suggested in RFC 4342, sec. 5) instead of the packet size s. Also figured out that __u32 is ample resource enough. After applying, I got the following in the logs: ccid3_hc_tx_packet_recv: client(f7421700), s=6, MSS=1424, w_init=4380, R_sample=176us, X=24886363 Had the previous variant been used, w_init would have been as low as 24. Committer note: removed unneeded cast to unsigned long long that was causing a compiler warning on 64bit architectures. Signed-off-by: Gerrit Renker Acked-by: Ian McDonald Signed-off-by: Arnaldo Carvalho de Melo Signed-off-by: David S. Miller commit 4ee3ac2c56a551e2accb51825f3e165fece29951 Author: Gerrit Renker Date: Tue Mar 20 13:11:24 2007 -0300 [CCID3]: Re-order CCID 3 source file No code change at all. This splits ccid3.c into a RX and a TX section, so that the file has an organisation similar to the other ones (e.g. packet_history.{h,c}). Signed-off-by: Gerrit Renker Acked-by: Ian McDonald Signed-off-by: Arnaldo Carvalho de Melo Signed-off-by: David S. Miller commit 930b7dcd56ab1f4d4290a0d795e341d7ea3ca99f Author: Gerrit Renker Date: Tue Mar 20 13:10:15 2007 -0300 [CCID3]: Remove redundant `len' test Since CCID3 avoids sending 0-byte data packets (cf. ccid3_hc_tx_send_packet), testing for zero-payload length, as performed by ccid3_hc_tx_update_s, is redundant - hence removed by this patch. Signed-off-by: Gerrit Renker Acked-by: Ian McDonald Signed-off-by: Arnaldo Carvalho de Melo Signed-off-by: David S. Miller commit 5cf8b479a0dbe6a1b38665fd7fdc81158089d74d Author: Gerrit Renker Date: Tue Mar 20 13:08:19 2007 -0300 [DCCP]: Remove ambiguity in the way before48 is used This removes two ambiguities in employing the new definition of before48, following the analysis on http://www.mail-archive.com/dccp@vger.kernel.org/msg01295.html (1) Updating GSR when P.seqno >= S.SWL With the old definition we did not update when P.seqno and S.SWL are 2^47 apart. To ensure the same behaviour as with the old definition, this is replaced with the equivalent condition dccp_delta_seqno(S.SWL, P.seqno) >= 0 (2) Sending SYNC when P.seqno >= S.OSR Here it is debatable whether the new definition causes an ambiguity: the case is similar to (1); and to have consistency with the case (1), we use the equivalent condition dccp_delta_seqno(S.OSR, P.seqno) >= 0 Detailed Justification commit b86047d1d5d159a4b37b6b12d4256a9e37087c99 Author: Gerrit Renker Date: Tue Mar 20 13:03:47 2007 -0300 [DCCP]: Fix for follows48 The follows48 relation identifies whether 48-bit sequence number x is the direct successor of y. 
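Since several of the DCCP changelogs above and below reason about modulo-2^48 sequence arithmetic, a small standalone sketch may help. This is plain userspace C written for illustration only, not the kernel's dccp.h code: the names are made up and the semantics are inferred from the descriptions in this log (a signed delta reduced modulo 2^48, with before48/follows48 built on top of it); the follows48 shown here already incorporates the wrap-around handling discussed next.

    #include <stdio.h>
    #include <stdint.h>

    /* Illustration only, not kernel code: signed delta modulo 2^48. */
    static int64_t delta_seqno48(uint64_t a, uint64_t b)
    {
            uint64_t d = (b - a) & 0xFFFFFFFFFFFFULL;  /* reduce mod 2^48     */
            if (d & 0x800000000000ULL)                 /* sign-extend bit 47  */
                    d |= 0xFFFF000000000000ULL;
            return (int64_t)d;
    }

    /* "a came before b" and "a is the direct successor of b". */
    static int before48(uint64_t a, uint64_t b)  { return delta_seqno48(a, b) > 0; }
    static int follows48(uint64_t a, uint64_t b) { return delta_seqno48(b, a) == 1; }

    int main(void)
    {
            printf("%d\n", follows48(0x1230000ULL, 0x122FFFFULL));  /* 1          */
            printf("%d\n", follows48(0x0ULL, 0xFFFFFFFFFFFFULL));   /* 1, wraps   */
            printf("%lld\n", (long long)delta_seqno48(5, 3));       /* -2         */
            return 0;
    }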
Currently, it does not handle cases of the following type correctly: follows48(0x(prefix)10000LL, 0x(prefix)0FFFFLL) where prefix is an arbitrary hex sequence of up to 7 digits. This is fixed by reusing the new dccp_delta_seqno function. Signed-off-by: Gerrit Renker Acked-by: Ian McDonald Signed-off-by: Arnaldo Carvalho de Melo Signed-off-by: David S. Miller commit a607652330c2228e8eb52c1d3abc7026b6e2b714 Author: Gerrit Renker Date: Tue Mar 20 13:00:26 2007 -0300 [DCCP]: Make `before' relation unambiguous Problem: commit 3e92974fb76364d9f3995dce1b574e92245ad288 Author: Gerrit Renker Date: Tue Mar 20 12:45:59 2007 -0300 [DCCP]: Make dccp_delta_seqno return signed numbers Problem: commit b4bead74f90503612e060c55de46c58fe686e6a1 Author: Gerrit Renker Date: Tue Mar 20 12:26:51 2007 -0300 [DCCP]: 48-bit sequence number arithmetic This patch * organizes the sequence arithmetic functions into one corner of dccp.h * performs a small modification of dccp_set_seqno to make it more widely reusable (now it is safe to use any number, since it performs modulo-2^48 assignment) * adds functions and generic macros for 48-bit sequence arithmetic: --48 bit complement --modulo-48 addition and modulo-48 subtraction --dccp_inc_seqno now a special case of add48 Constants renamed following a suggestion by Arnaldo. Signed-off-by: Gerrit Renker Acked-by: Ian McDonald Signed-off-by: Arnaldo Carvalho de Melo Signed-off-by: David S. Miller commit 2c5bbd4374b53e462573cbc63e6ac954ba707167 Author: Arnaldo Carvalho de Melo Date: Tue Mar 20 12:08:20 2007 -0300 [FORCEDETH]: Use skb_tailroom where appropriate Reducing the number of skb->data direct accesses. Signed-off-by: Arnaldo Carvalho de Melo Signed-off-by: David S. Miller commit d879d8e48f8dcf7dd5a207919a52899cf9aa4f29 Author: Arnaldo Carvalho de Melo Date: Tue Mar 20 12:00:44 2007 -0300 [LMC]: lmc_main wants to use skb_tailroom At that point it is equivalent to what was being used, skb->end - skb->data, and the need is clearly the one skb_tailroom satisfies. Signed-off-by: Arnaldo Carvalho de Melo Signed-off-by: David S. Miller commit b85118691699738cf43fc7c3a8cf6f6c736c8f8e Author: Arnaldo Carvalho de Melo Date: Tue Mar 20 11:52:34 2007 -0300 [ATM] idt77252: Fix double kfree_skb on failure in push_rx_skb Signed-off-by: Arnaldo Carvalho de Melo commit 5e3c12a36a9bfc336d1f4fa546fe7d93b20fe217 Author: Arnaldo Carvalho de Melo Date: Mon Mar 19 22:29:03 2007 -0300 [SK_BUFF] ipv6: Use skb_network_offset in some more places So that we reduce the number of direct accesses to skb->data. Signed-off-by: Arnaldo Carvalho de Melo commit ca4bf15f0fbd7fc5d6b4a2e9aaa631b0bfd2edf3 Author: Arnaldo Carvalho de Melo Date: Sun Mar 25 23:06:12 2007 -0700 [NETLINK]: Use nlmsg_trim() where appropriate Signed-off-by: Arnaldo Carvalho de Melo Signed-off-by: David S. Miller commit 2ff4d9fea419bf075560a0508be79acd26c7eca9 Author: Arnaldo Carvalho de Melo Date: Mon Mar 19 22:28:08 2007 -0300 [NETLINK]: Remove NLMSG_{NEW_ANSWER,CANCEL,END} Not used anywhere and defined inside __KERNEL__, Thomas acked this on irc. 
Signed-off-by: Arnaldo Carvalho de Melo commit 6035c5270357db02df9bbe9b49bc00228cf3ccb8 Author: Arnaldo Carvalho de Melo Date: Mon Mar 19 22:27:36 2007 -0300 [SK_BUFF]: Remove skb_add_mtu() leftovers Signed-off-by: Arnaldo Carvalho de Melo commit 30386fd55223cdacf06bd4180146c1996bd714ab Author: Arnaldo Carvalho de Melo Date: Mon Mar 19 22:26:45 2007 -0300 [NETLINK]: Introduce nlmsg_hdr() helper For the common "(struct nlmsghdr *)skb->data" sequence, so that we reduce the number of direct accesses to skb->data and for consistency with all the other cast skb member helpers. Signed-off-by: Arnaldo Carvalho de Melo commit 7802c1b3fc0bc1c9402eb3a177cbccf824733586 Author: Robert Olsson Date: Mon Mar 19 16:29:58 2007 -0700 [IPV4]: fib_trie root node settings The threshold for the root node can be set more aggressively to get better tree compression. The new setting makes the root grow from 16 to 19 bits, a substantial improvement in average depth with the current table of 214393 prefixes. But really the dynamic resize needs more investigation, both in terms of convergence and performance, and maybe it should be possible to change... Maybe just for the brave to start with, or we may have to back this out. commit 166afd6839ae992772806404fed43b8ad36708ea Author: Robert Olsson Date: Mon Mar 19 16:27:37 2007 -0700 [IPV4]: fib_trie resize break The patch below adds a break condition for the resize operations. If we don't achieve the desired fill factor, a warning is printed. The trie should still be operational, but new thresholds should be considered. Signed-off-by: Robert Olsson Signed-off-by: David S. Miller commit 965b1eff2441f8f3e932a1fecb4dfa9537db9aa9 Author: Arnaldo Carvalho de Melo Date: Mon Mar 19 10:48:59 2007 -0300 [SK_BUFF]: Adjust the zeroing up to tail in __alloc_skb too I did it just in alloc_skb_from_cache, forgot __alloc_skb, fixed now. Signed-off-by: Arnaldo Carvalho de Melo Signed-off-by: David S. Miller commit 1a5a3a9b246b7aa76648d1cbe83e7c834a73123a Author: Arnaldo Carvalho de Melo Date: Thu Apr 19 20:43:29 2007 -0700 [SK_BUFF]: Convert skb->end to sk_buff_data_t Now to convert the last one, skb->data, which will allow many simplifications and the removal of some of the offset helpers. Signed-off-by: Arnaldo Carvalho de Melo Signed-off-by: David S. Miller commit 43105d27dafd74311a28269ed134e5f6a07d6592 Author: David S. Miller Date: Thu Apr 19 20:34:51 2007 -0700 [VLAN] vlan_dev: Use skb_reset_network_header(). Signed-off-by: David S. Miller commit 815ac2f836b8062f5cbf732033fc64c642f2e937 Author: Arnaldo Carvalho de Melo Date: Thu Apr 19 20:29:13 2007 -0700 [SK_BUFF]: Convert skb->tail to sk_buff_data_t So that it is also an offset from skb->head. This reduces its size from 8 to 4 bytes on 64bit architectures, allowing us to combine it with the 4-byte hole left by the layer headers conversion, reducing struct sk_buff size to 256 bytes, i.e. 4 64-byte cachelines, and since the sk_buff slab cache is SLAB_HWCACHE_ALIGN... :-) Many calculations that previously required that skb->{transport,network,mac}_header first be converted to a pointer can now be done directly, being meaningful as offsets or pointers. Signed-off-by: Arnaldo Carvalho de Melo Signed-off-by: David S. Miller commit 6aa3e8fe12b6aee86f92ad3506f68c4561cc7c7f Author: Peter Kovar Date: Fri Mar 16 20:39:25 2007 -0700 [IrDA]: SMC SuperIO Chip LPC47N227 not identified properly SMC SuperIO Chip LPC47N227 used for IrDA is not detected because its device identification byte can be 0x7A instead of 0x5A.
Patch from Peter Kovar Cc: Jean Delvare Signed-off-by: Andrew Morton Signed-off-by: Samuel Ortiz Signed-off-by: David S. Miller commit 5960dfacff3e0fd0d01d53cb47a86c67c9406edb Author: Samuel Ortiz Date: Fri Mar 16 20:38:23 2007 -0700 [IrDA]: irda lockdep annotation Rmmoding irda triggers a lockdep false positive. Reported-by: Dave Jones Signed-off-by: Samuel Ortiz Signed-off-by: David S. Miller commit 0447b45f1e444732f0e3ef5d1043eca8abe84ccb Author: Samuel Ortiz Date: Fri Mar 16 20:35:25 2007 -0700 [IrDA]: removing stir4200 useless include stir4200 doesn't need to include irlap.h Signed-off-by: Samuel Ortiz Signed-off-by: David S. Miller commit 373bc91a42f7a89e1c8e1a0d47229f72ae1c81fe Author: Arnaldo Carvalho de Melo Date: Tue Apr 10 21:22:35 2007 -0700 [SK_BUFF]: Use offsets for skb->{mac,network,transport}_header on 64bit architectures With this we save 8 bytes per network packet, leaving a 4 bytes hole to be used in further shrinking work, likely with the offsetization of other pointers, such as ->{data,tail,end}, at the cost of adds, that were minimized by the usual practice of setting skb->{mac,nh,n}.raw to a local variable that is then accessed multiple times in each function, it also is not more expensive than before with regards to most of the handling of such headers, like setting one of these headers to another (transport to network, etc), or subtracting, adding to/from it, comparing them, etc. Now we have this layout for sk_buff on a x86_64 machine: [acme@mica net-2.6.22]$ pahole vmlinux sk_buff struct sk_buff { struct sk_buff * next; /* 0 8 */ struct sk_buff * prev; /* 8 8 */ struct rb_node rb; /* 16 24 */ struct sock * sk; /* 40 8 */ ktime_t tstamp; /* 48 8 */ struct net_device * dev; /* 56 8 */ /* --- cacheline 1 boundary (64 bytes) --- */ struct net_device * input_dev; /* 64 8 */ sk_buff_data_t transport_header; /* 72 4 */ sk_buff_data_t network_header; /* 76 4 */ sk_buff_data_t mac_header; /* 80 4 */ /* XXX 4 bytes hole, try to pack */ struct dst_entry * dst; /* 88 8 */ struct sec_path * sp; /* 96 8 */ char cb[48]; /* 104 48 */ /* cacheline 2 boundary (128 bytes) was 24 bytes ago*/ unsigned int len; /* 152 4 */ unsigned int data_len; /* 156 4 */ unsigned int mac_len; /* 160 4 */ union { __wsum csum; /* 4 */ __u32 csum_offset; /* 4 */ }; /* 164 4 */ __u32 priority; /* 168 4 */ __u8 local_df:1; /* 172 1 */ __u8 cloned:1; /* 172 1 */ __u8 ip_summed:2; /* 172 1 */ __u8 nohdr:1; /* 172 1 */ __u8 nfctinfo:3; /* 172 1 */ __u8 pkt_type:3; /* 173 1 */ __u8 fclone:2; /* 173 1 */ __u8 ipvs_property:1; /* 173 1 */ /* XXX 2 bits hole, try to pack */ __be16 protocol; /* 174 2 */ void (*destructor)(struct sk_buff *); /* 176 8 */ struct nf_conntrack * nfct; /* 184 8 */ /* --- cacheline 3 boundary (192 bytes) --- */ struct sk_buff * nfct_reasm; /* 192 8 */ struct nf_bridge_info *nf_bridge; /* 200 8 */ __u16 tc_index; /* 208 2 */ __u16 tc_verd; /* 210 2 */ dma_cookie_t dma_cookie; /* 212 4 */ __u32 secmark; /* 216 4 */ __u32 mark; /* 220 4 */ unsigned int truesize; /* 224 4 */ atomic_t users; /* 228 4 */ unsigned char * head; /* 232 8 */ unsigned char * data; /* 240 8 */ unsigned char * tail; /* 248 8 */ /* --- cacheline 4 boundary (256 bytes) --- */ unsigned char * end; /* 256 8 */ }; /* size: 264, cachelines: 5 */ /* sum members: 260, holes: 1, sum holes: 4 */ /* bit holes: 1, sum bit holes: 2 bits */ /* last cacheline: 8 bytes */ On 32 bits nothing changes, and pointers continue to be used with the compiler turning all this abstraction layer into dust. 
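As a toy model of the trick the sk_buff changelog above describes (offsets on 64-bit, pointers on 32-bit, hidden behind accessors), the self-contained program below may help. It is not the kernel's skbuff.h: the struct, the accessor, and the UINTPTR_MAX test standing in for the kernel's own 64-bit configuration check are all made up for illustration.

    #include <stdio.h>
    #include <stdint.h>

    /* Toy model only, not skbuff.h. */
    struct toy_skb {
            unsigned char *head;
    #if UINTPTR_MAX > 0xffffffffu            /* stand-in for "64-bit build" */
            uint32_t       transport_header;  /* 4-byte offset from head    */
    #else
            unsigned char *transport_header;  /* plain pointer, as before   */
    #endif
            unsigned char  buf[128];
    };

    static unsigned char *toy_transport_header(struct toy_skb *skb)
    {
    #if UINTPTR_MAX > 0xffffffffu
            return skb->head + skb->transport_header;
    #else
            return skb->transport_header;
    #endif
    }

    int main(void)
    {
            struct toy_skb skb;

            skb.head = skb.buf;
    #if UINTPTR_MAX > 0xffffffffu
            skb.transport_header = 14;                /* store an offset  */
    #else
            skb.transport_header = skb.head + 14;     /* store a pointer  */
    #endif
            printf("header at offset %ld, member size %zu bytes\n",
                   (long)(toy_transport_header(&skb) - skb.head),
                   sizeof skb.transport_header);
            return 0;
    }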
But there are some sk_buff validation tricks that are now possible, humm... :-) Signed-off-by: Arnaldo Carvalho de Melo Signed-off-by: David S. Miller commit 693262d397520c0b121ac4d3db6b8ef898b0dcf4 Author: Arnaldo Carvalho de Melo Date: Tue Apr 10 21:21:55 2007 -0700 [SK_BUFF]: unions of just one member don't get anything done, kill them Renaming skb->h to skb->transport_header, skb->nh to skb->network_header and skb->mac to skb->mac_header, to match the names of the associated helpers (skb[_[re]set]_{transport,network,mac}_header). Signed-off-by: Arnaldo Carvalho de Melo Signed-off-by: David S. Miller commit e81ca7fef63bc8f8d2c01e45420650a888d43e41 Author: Arnaldo Carvalho de Melo Date: Fri Mar 16 17:26:39 2007 -0300 [SK_BUFF]: Introduce skb_network_header_len For the common sequence "skb->h.raw - skb->nh.raw", similar to skb->mac_len, that is precalculated tho, don't think we need to bloat skb with one more member, so just use this new helper, reducing the number of non-skbuff.h references to the layer headers even more. Signed-off-by: Arnaldo Carvalho de Melo Signed-off-by: David S. Miller commit 2fd574a98de8b432b8bf26b44ddf79ab7682497d Author: Arnaldo Carvalho de Melo Date: Fri Mar 16 17:19:57 2007 -0300 [SK_BUFF]: Use the helpers to get the layer header pointer Some more cases... Signed-off-by: Arnaldo Carvalho de Melo Signed-off-by: David S. Miller commit dc0a3fbf2c6387fbf35bc16016e8fb2c7b802561 Author: Patrick McHardy Date: Fri Mar 16 12:34:52 2007 -0700 [NET_SCHED]: Fix warning net/sched/sch_api.c: In function 'psched_show': net/sched/sch_api.c:1219: warning: format '%08x' expects type 'unsigned int', but argument 6 has type 's64' Signed-off-by: Patrick McHardy Signed-off-by: David S. Miller commit eacc5a5ca49a536f348b31b192d7235977741a30 Author: Patrick McHardy Date: Fri Mar 16 12:31:28 2007 -0700 [NET_SCHED]: sch_cbq: fix watchdog scheduled too late q->now is increased during dequeue and doesn't contain the current time afterwards, resulting in a too large timeout value for the qdisc watchdog. Use "now" instead, which still contains the current time. Signed-off-by: Patrick McHardy Signed-off-by: David S. Miller commit 444213a9f148e6571b2fa790f77dc01476baad83 Author: Patrick McHardy Date: Fri Mar 16 01:23:28 2007 -0700 [NET_SCHED]: Export real timer resolution in /proc/net/psched The timer resolution exported in /proc/net/psched is used by userspace to calculate HTB's burst values. Currently it is set to HZ, since we're now using hrtimers, use KTIME_MONOTONIC_RES, which makes HTB use smaller burst values. This patch also affects libnl, which incorrectly uses this value for the SFQ perturbation parameter, which is always in seconds, and some routing cache values, which are in USER_HZ, so both cases are broken anyway. Signed-off-by: Patrick McHardy Signed-off-by: David S. Miller commit 5cd538f9cc544438b2473680f4cdcd869cf972d8 Author: Patrick McHardy Date: Fri Mar 16 01:23:02 2007 -0700 [NET_SCHED]: kill jiffie conversion macros Now that all packet schedulers have been converted to hrtimers most users of PSCHED_JIFFIE2US and PSCHED_US2JIFFIE are gone. The remaining users use it to convert external time units to packet scheduler clock ticks, so use PSCHED_TICKS_PER_SEC instead. Signed-off-by: Patrick McHardy Signed-off-by: David S. Miller commit e7b28fb7fe1fae886e3a48af99298d28579b21d7 Author: Patrick McHardy Date: Fri Mar 16 01:22:39 2007 -0700 [NET_SCHED]: sch_htb: use hrtimer based watchdog Signed-off-by: Patrick McHardy Signed-off-by: David S. 
Miller commit 2af1ba3f37900742b590a0aa3f08c79611fb42d2 Author: Patrick McHardy Date: Fri Mar 16 01:22:20 2007 -0700 [NET_SCHED]: sch_cbq: use hrtimer for delay_timer Switch delay_timer to hrtimer. The class penalty parameter is changed to use psched ticks as units. Since iproute never supported using this and the only existing user (libnl) incorrectly assumes psched ticks as units anyway, this shouldn't break anything. Signed-off-by: Patrick McHardy Signed-off-by: David S. Miller commit 4e64a1e045a533baeb55f3cdb663e4c438546683 Author: Patrick McHardy Date: Fri Mar 16 01:21:40 2007 -0700 [NET_SCHED]: sch_cbq: fix cbq_undelay_prio for non-active priorites cbq_undelay_prio is supposed to return a time delta, but returns the current time for non-active priorities, causing cbq_undelay to mark the priority as active and schedule a timer for twice the current time. Signed-off-by: Patrick McHardy Signed-off-by: David S. Miller commit 3031137ec1d95c36ec870eb4a583878bb7f3878e Author: Patrick McHardy Date: Fri Mar 16 01:21:11 2007 -0700 [NET_SCHED]: sch_cbq: use hrtimer based watchdog Signed-off-by: Patrick McHardy Signed-off-by: David S. Miller commit 24f97bd1b26156a3c98615972aaa92d444da32f5 Author: Patrick McHardy Date: Fri Mar 16 01:20:31 2007 -0700 [NET_SCHED]: sch_netem: use hrtimer based watchdog Signed-off-by: Patrick McHardy Signed-off-by: David S. Miller commit 0554a4c78629422174ff03247abeed2025bf773e Author: Patrick McHardy Date: Fri Mar 16 01:20:07 2007 -0700 [NET_SCHED]: sch_tbf: use hrtimer based watchdog Signed-off-by: Patrick McHardy Signed-off-by: David S. Miller commit 10454fa97532d9306ce4560a74fc49ee39cd5257 Author: Patrick McHardy Date: Fri Mar 16 01:19:33 2007 -0700 [NET_SCHED]: sch_hfsc: use hrtimer based watchdog Signed-off-by: Patrick McHardy Signed-off-by: David S. Miller commit b2464dfea7a731cc24d07ab5936cd105adeb6384 Author: Patrick McHardy Date: Fri Mar 16 01:19:15 2007 -0700 [NET_SCHED]: Add hrtimer based qdisc watchdog Signed-off-by: Patrick McHardy Signed-off-by: David S. Miller commit a4b782bcd49ff39b40665619332baa2d42fcbe71 Author: Patrick McHardy Date: Fri Mar 16 01:18:42 2007 -0700 [NET_SCHED]: Use ktime as clocksource Get rid of the manual clock source selection mess and use ktime. Also use a scalar representation, which allows to clean up pkt_sched.h a bit more and results in less ktime_to_ns() calls in most cases. The PSCHED_US2JIFFIE/PSCHED_JIFFIE2US macros are implemented quite inefficient by this patch, following patches will convert all qdiscs to hrtimers and get rid of them entirely. Signed-off-by: Patrick McHardy Signed-off-by: David S. Miller commit 09f90770918b92592cbcb5ac4e26d530c5ed8b3a Author: Arnaldo Carvalho de Melo Date: Thu Mar 15 21:42:27 2007 -0300 [SK_BUFF]: Some more layer header conversions Signed-off-by: Arnaldo Carvalho de Melo Signed-off-by: David S. Miller commit dabdb6eca2aaa64962247faa8f916ffc7d382022 Author: Arnaldo Carvalho de Melo Date: Thu Mar 15 21:08:55 2007 -0300 [KBUILD]: Unifdef headers changed by the skb layer header refactorings Signed-off-by: Arnaldo Carvalho de Melo Signed-off-by: David S. Miller commit a3e6e143fa0a605baaf7b673b824b9956bf5e156 Author: Arnaldo Carvalho de Melo Date: Wed Mar 14 21:05:37 2007 -0300 [SK_BUFF]: More skb_put related skb_reset_transport_header This time we have to set it to skb->tail that is not anymore equal to skb->data, so we either add a new helper or just add the skb->tail - skb->data offset, for now do the later. Signed-off-by: Arnaldo Carvalho de Melo Signed-off-by: David S. 
Miller commit 7a864b135154726cc2954842388883ac136a06a0 Author: Arnaldo Carvalho de Melo Date: Wed Mar 14 21:05:03 2007 -0300 [IPV6]: Reset the network header in ip6_nd_hdr ip6_nd_hdr is always called immediately after a alloc_skb + skb_reserve sequence, i.e. when skb->tail is equal to skb->data, making it correct to use skb_reset_network_header(). Signed-off-by: Arnaldo Carvalho de Melo Signed-off-by: David S. Miller commit 878c53e64db79e6b36e82c58ddbc72974abb8298 Author: Arnaldo Carvalho de Melo Date: Wed Mar 14 21:04:34 2007 -0300 [SK_BUFF]: More skb_put related conversions to skb_reset_transport_header This is similar to the skb_reset_network_header(), i.e. at the point we reset the transport header pointer/offset skb->tail is equal to skb->data. Signed-off-by: Arnaldo Carvalho de Melo Signed-off-by: David S. Miller commit 3f53b038460c078ecfe7a3ae01d0d7364b0ed338 Author: Pablo Neira Ayuso Date: Wed Mar 14 16:45:39 2007 -0700 [NETFILTER]: nfnetlink: parse attributes with nfattr_parse in nfnetlink_check_attribute Use nfattr_parse to parse attributes, this patch also modifies the default behaviour since unknown attributes will be ignored instead of returning EINVAL. This ensure backward compatibility: new libraries with new attributes and old kernels can work. Signed-off-by: Pablo Neira Ayuso Signed-off-by: Patrick McHardy Signed-off-by: David S. Miller commit c91dbbabbae681e4d2964972947137126e9105c3 Author: Pablo Neira Ayuso Date: Wed Mar 14 16:45:19 2007 -0700 [NETFILTER]: ctnetlink: add support for internal tcp connection tracking flags handling This patch let userspace programs set the IP_CT_TCP_BE_LIBERAL flag to force the pickup of established connections. Signed-off-by: Pablo Neira Ayuso Signed-off-by: Patrick McHardy Signed-off-by: David S. Miller commit 72432763a70d1ecf33575e4438c39b841a96c3f3 Author: Willy Tarreau Date: Wed Mar 14 16:44:53 2007 -0700 [NETFILTER]: TCP conntrack: factorize out the PUSH flag The PUSH flag is accepted with every other valid combination. Let's get it out of the tcp_valid_flags table and reduce the number of combinations we have to handle. This does not significantly reduce the table size however (8 bytes). Signed-off-by: Willy Tarreau Signed-off-by: Patrick McHardy Signed-off-by: David S. Miller commit 3fae774866067d5f7bef873ce4fdeab0b855048f Author: Willy Tarreau Date: Wed Mar 14 16:44:31 2007 -0700 [NETFILTER]: TCP conntrack: accept RST|PSH as valid This combination has been encountered on an IBM AS/400 in response to packets sent to a closed session. There is no particular reason to mark it invalid. Signed-off-by: Willy Tarreau Signed-off-by: Patrick McHardy Signed-off-by: David S. Miller commit b60a44451343d59e095745fb8a4d185d87f96a8a Author: Yasuyuki Kozakai Date: Wed Mar 14 16:44:01 2007 -0700 [NETFILTER]: nf_conntrack: add nf_copy() to safely copy members in skb This unifies the codes to copy netfilter related datas. Before copying, nf_copy() puts original members in destination skb. Signed-off-by: Yasuyuki Kozakai Signed-off-by: Patrick McHardy Signed-off-by: David S. Miller commit 9b0491eb71a0fd98c7f7337702fa2ba7b805ad36 Author: Yasuyuki Kozakai Date: Wed Mar 14 16:43:37 2007 -0700 [NETFILTER]: nf_conntrack: add __nf_copy() to copy members in skb This unifies the codes to copy netfilter related datas. Note that __nf_copy() assumes destination skb doesn't have any netfilter related members. Signed-off-by: Yasuyuki Kozakai Signed-off-by: Patrick McHardy Signed-off-by: David S. 
Miller commit 5ce2ca2f53971edaf071a4f90676267e47e6ee0a Author: Sami Farin Date: Wed Mar 14 16:43:00 2007 -0700 [NETFILTER]: nf_conntrack: use jhash2 in __hash_conntrack Now it uses jhash, but using jhash2 would be around 3-4 times faster (on P4). Signed-off-by: Sami Farin Signed-off-by: Patrick McHardy Signed-off-by: David S. Miller commit d546b11b9f4a20b39885c10b3b77be18d534905f Author: Patrick McHardy Date: Wed Mar 14 16:42:29 2007 -0700 [JHASH]: Use const in jhash2 Use const to avoid forcing users to cast const data. Signed-off-by: Patrick McHardy Signed-off-by: David S. Miller commit 2d63f06de48ee82daf1af7f4728104a674761fe5 Author: Pablo Neira Ayuso Date: Wed Mar 14 16:42:11 2007 -0700 [NETFILTER]: nfnetlink: move EXPORT_SYMBOL declarations next to the exported symbol Signed-off-by: Pablo Neira Ayuso Signed-off-by: Patrick McHardy Signed-off-by: David S. Miller commit 079207375ba7e69f801b51c1b0024d534725e7fb Author: Pablo Neira Ayuso Date: Wed Mar 14 16:41:47 2007 -0700 [NETFILTER]: nfnetlink: remove unused includes in nfnetlink.c Signed-off-by: Pablo Neira Ayuso Signed-off-by: Patrick McHardy Signed-off-by: David S. Miller commit fc21be19962d80e82e39bba1228bda9d047ee0d1 Author: Pablo Neira Ayuso Date: Wed Mar 14 16:41:28 2007 -0700 [NETFILTER]: nfnetlink: remove unrequired check in nfnetlink_get_subsys subsys_table is initialized to NULL, therefore just returns NULL in case that it is not set. Signed-off-by: Pablo Neira Ayuso Signed-off-by: Patrick McHardy Signed-off-by: David S. Miller commit 468c15e6f9ddbedb5660d95a724ea412fc64b8b8 Author: Pablo Neira Ayuso Date: Wed Mar 14 16:41:03 2007 -0700 [NETFILTER]: nfnetlink: remove duplicate checks in nfnetlink_check_attributes Remove nfnetlink_check_attributes duplicates message size and callback id checks. nfnetlink_find_client and nfnetlink_rcv_msg already do such checks. Signed-off-by: Pablo Neira Ayuso Signed-off-by: Patrick McHardy Signed-off-by: David S. Miller commit e0cb12296def519b1743f6e08e7a7e8f88d70a59 Author: Pablo Neira Ayuso Date: Wed Mar 14 16:40:38 2007 -0700 [NETFILTER]: nfnetlink: remove early debugging messages from nfnetlink Signed-off-by: Pablo Neira Ayuso Signed-off-by: Patrick McHardy Signed-off-by: David S. Miller commit e2f42f06939329f695f13950a10754c838ee8725 Author: Patrick McHardy Date: Wed Mar 14 16:40:10 2007 -0700 [NETFILTER]: nf_conntrack: uninline notifier registration functions Signed-off-by: Patrick McHardy Signed-off-by: David S. Miller commit c52c30973a421d946c8765e9ae7639bdbf3f716e Author: Patrick McHardy Date: Wed Mar 14 16:39:45 2007 -0700 [NETFILTER]: nfnetlink: use netlink_run_queue() Signed-off-by: Patrick McHardy Signed-off-by: David S. Miller commit 74d4a3c6b9bbb850e9569b6ec45973e478e64cac Author: Patrick McHardy Date: Wed Mar 14 16:39:25 2007 -0700 [NETFILTER]: nfnetlink: use mutex instead of semaphore Signed-off-by: Patrick McHardy Signed-off-by: David S. Miller commit 1b9608431309a07b4f9eef623fe9d7e74cadabe7 Author: Patrick McHardy Date: Wed Mar 14 16:39:07 2007 -0700 [NETFILTER]: nf_conntrack: simplify l4 protocol array allocation The retrying after an allocation failure is not necessary anymore since we're holding the mutex the entire time, for the same reason the double allocation race can't happen anymore. Signed-off-by: Patrick McHardy Signed-off-by: David S. 
Miller commit 7012ee6a1b5eb14470362389274c3032fc2ff6d5 Author: Patrick McHardy Date: Wed Mar 14 16:38:48 2007 -0700 [NETFILTER]: nf_conntrack: simplify protocol locking Now that we don't use nf_conntrack_lock anymore but a single mutex for all protocol handling, no need to release and grab it again for sysctl registration. Signed-off-by: Patrick McHardy Signed-off-by: David S. Miller commit 5c5c4a827187f0470d6398ee4efb171d3bae8bc9 Author: Patrick McHardy Date: Wed Mar 14 16:38:25 2007 -0700 [NETFILTER]: nf_conntrack: remove ugly hack in l4proto registration Remove ugly special-casing of nf_conntrack_l4proto_generic, all it wants is its sysctl tables registered, so do that explicitly in an init function and move the remaining protocol initialization and cleanup code to nf_conntrack_proto.c as well. Signed-off-by: Patrick McHardy Signed-off-by: David S. Miller commit d94f31f053cb81b7c24c3aa1a9f0505db1c47ae6 Author: Patrick McHardy Date: Wed Mar 14 16:37:52 2007 -0700 [NETFILTER]: nf_conntrack: switch protocol registration/unregistration to mutex The protocol lookups done by nf_conntrack are already protected by RCU, there is no need to keep taking nf_conntrack_lock for registration and unregistration. Switch to a mutex. Signed-off-by: Patrick McHardy Signed-off-by: David S. Miller commit 1a7d613622ab5120d6e1f73637e0430236251b70 Author: Patrick McHardy Date: Wed Mar 14 16:37:25 2007 -0700 [NETFILTER]: Remove IPv4 only connection tracking/NAT Remove the obsolete IPv4 only connection tracking/NAT as scheduled in feature-removal-schedule. Signed-off-by: Patrick McHardy Signed-off-by: David S. Miller commit 36551f6fb230ed30588b4fb7b4649d4d8029b190 Author: Tobias Klauser Date: Wed Mar 14 16:36:16 2007 -0700 [NETFILTER]: x_tables: remove duplicate of xt_prefix Remove xt_proto_prefix array which duplicates xt_prefix and change all users of xt_proto_prefix to xt_prefix. Signed-off-by: Tobias Klauser Signed-off-by: Patrick McHardy Signed-off-by: David S. Miller commit 8a0289091e05921d587c4188ee2276c454b16994 Author: David S. Miller Date: Thu Apr 19 19:55:44 2007 -0700 [IPV4] xfrm4_mode_beet: Use skb_transport_header(). Signed-off-by: David S. Miller commit bf5ea3fab1fc520c9cfd9228b2abdeb32097e6aa Author: Arnaldo Carvalho de Melo Date: Tue Mar 13 17:20:39 2007 -0300 [SK_BUFF]: Introduce skb_transport_header(skb) For the places where we need a pointer to the transport header, it is still legal to touch skb->h.raw directly if just adding to, subtracting from or setting it to another layer header. Signed-off-by: Arnaldo Carvalho de Melo Signed-off-by: David S. Miller commit b61951b712c484f2c886c0e6c4a1358f29b32ad9 Author: Arnaldo Carvalho de Melo Date: Tue Mar 13 17:17:10 2007 -0300 [SCTP]: Eliminate some pointer attributions to the skb layer headers Signed-off-by: Arnaldo Carvalho de Melo Signed-off-by: David S. Miller commit c20a452ff92ff72fd5e8a1d74e9b248ceec1eb8d Author: Arnaldo Carvalho de Melo Date: Tue Mar 13 17:10:43 2007 -0300 [SK_BUFF]: More skb_reset_transport_header conversions These are a bit more subtle, they are of this type: - skb->h.raw = payload; __skb_pull(skb, payload - skb->data); + skb_reset_transport_header(skb); __skb_pull results in: skb->data = skb->data + payload - skb->data; skb->data = payload; So after __skb_pull we have skb->data pointing to payload and we can just call skb_reset_transport_header(skb), that will do: skb->h.raw = payload; The others are similar, allowing us to get rid of some more cases where a pointer was being attributed to the layer headers. 
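Restating the conversion pattern just described as code may make the ordering argument easier to see. The two functions below are made-up illustrations, not taken from this tree; the comments spell out the equivalence argued in the changelog above.

    /* Before: transport header set from a pointer, then the pull. */
    static void before_conversion(struct sk_buff *skb, unsigned char *payload)
    {
            skb->h.raw = payload;
            __skb_pull(skb, payload - skb->data);
    }

    /* After: pull first, so skb->data == payload, then the reset helper,
     * which at this point means skb->h.raw = skb->data, i.e. = payload. */
    static void after_conversion(struct sk_buff *skb, unsigned char *payload)
    {
            __skb_pull(skb, payload - skb->data);
            skb_reset_transport_header(skb);
    }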
Signed-off-by: Arnaldo Carvalho de Melo Signed-off-by: David S. Miller commit 90889a6f8b91432fb5cc5315a7a13725fc2fef39 Author: Arnaldo Carvalho de Melo Date: Tue Apr 10 21:06:25 2007 -0700 [SK_BUFF]: Introduce ipipv6_hdr(), remove skb->h.ipv6h Signed-off-by: Arnaldo Carvalho de Melo Signed-off-by: David S. Miller commit 6767395e558aaf1a810d466ad595a9084077d48b Author: Arnaldo Carvalho de Melo Date: Thu Apr 19 19:44:04 2007 -0700 [SK_BUFF]: Introduce ipip_hdr(), remove skb->h.ipiph Signed-off-by: Arnaldo Carvalho de Melo Signed-off-by: David S. Miller commit d2ee6caada63fe8998c24722ac68baa3d649e582 Author: Arnaldo Carvalho de Melo Date: Tue Apr 10 21:04:22 2007 -0700 [SK_BUFF]: Introduce tcp_hdr(), remove skb->h.th Signed-off-by: Arnaldo Carvalho de Melo Signed-off-by: David S. Miller commit b17b090bccab4d0516b22f244a7e76f323852c5d Author: Arnaldo Carvalho de Melo Date: Sun Mar 18 17:43:48 2007 -0700 [TCP]: Introduce tcp_hdrlen() and tcp_optlen() The ip_hdrlen() buddy, created to reduce the number of skb->h.th-> uses and to avoid the longer, open coded equivalent. Ditched a no-op in bnx2 in the process. I wonder if we should have a BUG_ON(skb->h.th->doff < 5) in tcp_optlen()... Signed-off-by: Arnaldo Carvalho de Melo Signed-off-by: David S. Miller commit 5b4a78b98a9176b38a9661d31c70c595e2116ecf Author: Arnaldo Carvalho de Melo Date: Tue Mar 13 14:43:18 2007 -0300 [SK_BUFF]: Introduce icmp_hdr(), remove skb->h.icmph Signed-off-by: Arnaldo Carvalho de Melo Signed-off-by: David S. Miller commit f97624385f6bd2473547287db2a598998fa21340 Author: Arnaldo Carvalho de Melo Date: Tue Mar 13 14:28:48 2007 -0300 [SK_BUFF]: Introduce udp_hdr(), remove skb->h.uh Signed-off-by: Arnaldo Carvalho de Melo Signed-off-by: David S. Miller commit 59d462fd41c36d7de23b4792a96bce9681ad7003 Author: Arnaldo Carvalho de Melo Date: Tue Mar 13 14:19:23 2007 -0300 [SK_BUFF]: Introduce igmp_hdr() & friends, remove skb->h.igmph Signed-off-by: Arnaldo Carvalho de Melo Signed-off-by: David S. Miller commit e4f47096ddc9b7785f5bade6d83c1b4a03678e9b Author: Arnaldo Carvalho de Melo Date: Tue Mar 13 14:03:22 2007 -0300 [ICMP6]: Introduce icmp6_hdr() For consistency with all the other skb->h.raw accessors. Signed-off-by: Arnaldo Carvalho de Melo Signed-off-by: David S. Miller commit 48b738de63670eebe04ef32df03b071edd3b7995 Author: Arnaldo Carvalho de Melo Date: Tue Mar 13 13:59:32 2007 -0300 [SCTP]: Introduce sctp_hdr() For consistency with all the other skb->h.raw accessors. Signed-off-by: Arnaldo Carvalho de Melo Signed-off-by: David S. Miller commit 84f2ef495bff120e3eac0252e59e12c1fb753a52 Author: Arnaldo Carvalho de Melo Date: Tue Mar 13 13:51:52 2007 -0300 [SK_BUFF]: Introduce skb_set_transport_header For the cases where the transport header is being set to a offset from skb->data. Signed-off-by: Arnaldo Carvalho de Melo Signed-off-by: David S. Miller commit 839c67a4cbe60fee4f5c2d99d97df01e55cc366f Author: Arnaldo Carvalho de Melo Date: Sun Mar 18 17:42:23 2007 -0700 [SK_BUFF]: Introduce skb_transport_offset() For the quite common 'skb->h.raw - skb->data' sequence. Signed-off-by: Arnaldo Carvalho de Melo Signed-off-by: David S. Miller commit 797d35661bc46254f0dd2bb086fe4fbd32db36cb Author: Arnaldo Carvalho de Melo Date: Tue Mar 13 13:06:52 2007 -0300 [SK_BUFF]: Introduce skb_reset_transport_header(skb) For the common, open coded 'skb->h.raw = skb->data' operation, so that we can later turn skb->h.raw into a offset, reducing the size of struct sk_buff in 64bit land while possibly keeping it as a pointer on 32bit. 
This one touches just the most simple cases: skb->h.raw = skb->data; skb->h.raw = {skb_push|[__]skb_pull}() The next ones will handle the slightly more "complex" cases. Signed-off-by: Arnaldo Carvalho de Melo Signed-off-by: David S. Miller commit d0fe21cd5fe244a2d95fdf9c3a95788e827c8594 Author: Arnaldo Carvalho de Melo Date: Thu Apr 19 18:13:28 2007 -0700 [SK_BUFF]: Introduce ipv6_hdr(), remove skb->nh.ipv6h Now the skb->nh union has just one member, .raw, i.e. it is just like the skb->mac union, strange, no? I'm just leaving it like that till the transport layer is done with, when we'll rename skb->mac.raw to skb->mac_header (or ->mac_header_offset?), ditto for ->{h,nh}. Signed-off-by: Arnaldo Carvalho de Melo Signed-off-by: David S. Miller commit 65e5b7cc3060f6a6ab7a1da0903b74ee35b5687a Author: Arnaldo Carvalho de Melo Date: Mon Mar 12 20:56:31 2007 -0300 [SK_BUFF]: Introduce arp_hdr(), remove skb->nh.arph Signed-off-by: Arnaldo Carvalho de Melo Signed-off-by: David S. Miller commit f4f493ec81b0d62d358974ef2022812797994ded Author: Stephen Hemminger Date: Mon Mar 12 16:25:32 2007 -0700 [BRIDGE]: faster compare for link local addresses Use logic operations rather than memcmp() to compare destination address with link local multicast addresses. Signed-off-by: Stephen Hemminger Signed-off-by: David S. Miller commit 4f2c4fb5a8cbeaaeb3d1e57aabb200ad92a15001 Author: Arnaldo Carvalho de Melo Date: Fri Apr 20 22:47:35 2007 -0700 [SK_BUFF]: Introduce ip_hdr(), remove skb->nh.iph Signed-off-by: Arnaldo Carvalho de Melo Signed-off-by: David S. Miller commit a9299499b2db8be0b899bb235f1e60021f6a8d54 Author: Arnaldo Carvalho de Melo Date: Mon Mar 12 20:09:36 2007 -0300 [IPMR]: Fix bug introduced when converting to skb_network_reset_header Signed-off-by: Arnaldo Carvalho de Melo Signed-off-by: David S. Miller commit f971d866027255ec70eee1e3aa2cca43c98955a9 Author: Arnaldo Carvalho de Melo Date: Mon Mar 12 20:09:15 2007 -0300 [IP]: Introduce ip_hdrlen() For the common sequence "skb->nh.iph->ihl * 4", removing a good number of open coded skb->nh.iph uses, now to go after the rest... Just out of curiosity, here are the idioms found to get the same result: skb->nh.iph->ihl << 2 skb->nh.iph->ihl<<2 skb->nh.iph->ihl * 4 skb->nh.iph->ihl*4 (skb->nh.iph)->ihl * sizeof(u32) Signed-off-by: Arnaldo Carvalho de Melo Signed-off-by: David S. Miller commit 0e5f4eae92432f16d2316f8047f490cb23f37738 Author: Arnaldo Carvalho de Melo Date: Mon Mar 12 20:05:39 2007 -0300 [SK_BUFF] ipmr: Missed one conversion to skb_network_header() We can't access skb->nh.raw directly anymore, it will become an offset. Signed-off-by: Arnaldo Carvalho de Melo Signed-off-by: David S. Miller commit 70596e79ade8acd871ace5734e181b49092627fb Author: Stephen Hemminger Date: Mon Mar 12 14:35:37 2007 -0700 [NET]: show bound packet types Show what protocols are bound to what packet types in /proc/net/ptype Uses kallsyms to decode function pointers if possible. Example: Type Device Function ALL eth1 packet_rcv_spkt+0x0 0800 ip_rcv+0x0 0806 arp_rcv+0x0 86dd :ipv6:ipv6_rcv+0x0 Signed-off-by: Stephen Hemminger Signed-off-by: David S. Miller commit 9d7d981760abd0d8a341289b0e596aaf7abb8928 Author: Stephen Hemminger Date: Mon Mar 12 14:34:29 2007 -0700 [NET]: make seq_operations const The seq_file operations stuff can be marked constant to get it out of dirty cache. Signed-off-by: Stephen Hemminger Signed-off-by: David S. 
Miller commit e924f1306d78902af80b0c8d1baa1ef5711913ff Author: Stephen Hemminger Date: Mon Mar 12 14:33:50 2007 -0700 [NET]: network dev read_mostly For Eric, mark packet type and network device watermarks as read mostly. Signed-off-by: Stephen Hemminger Signed-off-by: David S. Miller commit 33f9dd309b19898417967ee56e9db71df55c41c2 Author: Arnaldo Carvalho de Melo Date: Sun Mar 11 22:39:41 2007 -0300 [SK_BUFF]: Introduce skb_set_network_header For the cases where the network header is being set to a offset from skb->data. Signed-off-by: Arnaldo Carvalho de Melo Signed-off-by: David S. Miller commit 271cd1e859158d1711d95c852248ffb506a88230 Author: Arnaldo Carvalho de Melo Date: Sun Mar 11 22:38:29 2007 -0300 [SK_BUFF] ipmr: Another skb_push related conversion to skb_reset_network_header Signed-off-by: Arnaldo Carvalho de Melo Signed-off-by: David S. Miller commit 4c29fc5269c2a23057d58f8d654ee189248f99c0 Author: Arnaldo Carvalho de Melo Date: Tue Apr 10 20:50:43 2007 -0700 [SK_BUFF]: Introduce skb_network_header() For the places where we need a pointer to the network header, it is still legal to touch skb->nh.raw directly if just adding to, subtracting from or setting it to another layer header. Signed-off-by: Arnaldo Carvalho de Melo Signed-off-by: David S. Miller commit b081111c7233b4deaf47dc5f86b24633849d5058 Author: Arnaldo Carvalho de Melo Date: Sat Mar 10 22:16:10 2007 -0300 [SK_BUFF]: Introduce skb_network_offset() For the quite common 'skb->nh.raw - skb->data' sequence. Signed-off-by: Arnaldo Carvalho de Melo Signed-off-by: David S. Miller commit 617b8aed6634b28a862b52e79ce67c8e77ce22b4 Author: Arnaldo Carvalho de Melo Date: Sat Mar 10 20:09:45 2007 -0300 [SK_BUFF] bonding: Set skb->nh.raw relative to skb->mac.raw Signed-off-by: Arnaldo Carvalho de Melo Signed-off-by: David S. Miller commit c38fa406c063aa7e06390c7c3f0613c707172dc6 Author: Arnaldo Carvalho de Melo Date: Sat Mar 10 19:59:16 2007 -0300 [SK_BUFF] xfrm4: use skb_reset_network_header Setting it to skb->h.raw, which is valid, in the (to become) old pointer based world order and in the new world of offset based layer headers. Signed-off-by: Arnaldo Carvalho de Melo Signed-off-by: David S. Miller commit 544523a0e7171ad54e5c5a781d6565bddfdfdf40 Author: Arnaldo Carvalho de Melo Date: Sat Mar 10 19:57:15 2007 -0300 [SK_BUFF] ipv6: More skb_reset_network_header conversions related to skb_pull Now related to this form: skb->nh.ipv6h = (struct ipv6hdr *)skb_put(skb, length); That, as the others, is done when skb->tail is still equal to skb->data, making the conversion to skb_reset_network_header possible. Also one more case equivalent to skb->nh.raw = skb->data, of this form: iph = (struct ipv6hdr *)skb->data; skb->nh.ipv6h = iph; Signed-off-by: Arnaldo Carvalho de Melo Signed-off-by: David S. Miller commit 7516b3f1fde7f45f29b1ab81b8ec8bd5ba232835 Author: Arnaldo Carvalho de Melo Date: Sat Mar 10 19:40:39 2007 -0300 [SK_BUFF]: Use skb_reset_network_header after skb_push Some more cases where skb->nh.iph was being set that were converted to using skb_reset_network_header. Signed-off-by: Arnaldo Carvalho de Melo Signed-off-by: David S. Miller commit 46d21173482d7bc3cd1a5b09a5c4d203439d47b4 Author: Arnaldo Carvalho de Melo Date: Sat Mar 10 19:27:27 2007 -0300 [SK_BUFF] ipconfig: Another conversion to skb_reset_network_header related to skb_put boot_pkt->iph is the first member, that is at skb->data, so just use skb_reset_network_header(). Signed-off-by: Arnaldo Carvalho de Melo Signed-off-by: David S. 
Miller commit ee7497906ab59e81c09d538c6be7399cb3cc92da Author: Arnaldo Carvalho de Melo Date: Sat Mar 10 19:15:25 2007 -0300 [SK_BUFF]: Some more skb_put cases converted to skb_reset_network_header Signed-off-by: Arnaldo Carvalho de Melo Signed-off-by: David S. Miller commit 1f8775325ea78bcafc26d554b2fec0df7c368164 Author: Arnaldo Carvalho de Melo Date: Sat Mar 10 19:04:55 2007 -0300 [SK_BUFF]: Some more simple skb_reset_network_header conversions This time of the type: skb->nh.iph = (struct iphdr *)skb->data; That is completely equivalent to: skb->nh.raw = skb->data; Wonder why people love casts... :-) Signed-off-by: Arnaldo Carvalho de Melo Signed-off-by: David S. Miller commit 820b410a04995868792fea41b18c8aa6dfe4be36 Author: Arnaldo Carvalho de Melo Date: Sat Mar 10 18:42:03 2007 -0300 [SK_BUFF]: Use skb_reset_network_header where the return of __pskb_pull was being used It returns skb->data, so we can just use skb_reset_network_header after it. Signed-off-by: Arnaldo Carvalho de Melo Signed-off-by: David S. Miller commit 285ce40f4ec887dc5a675db9a98eea4a2839d761 Author: Arnaldo Carvalho de Melo Date: Sat Mar 10 18:40:59 2007 -0300 [SK_BUFF]: Use skb_reset_network_header where the skb_pull return was being used But only in the cases where its a newly allocated skb, i.e. one where skb->tail is equal to skb->data, or just after skb_reserve, where this requirement is maintained. Signed-off-by: Arnaldo Carvalho de Melo Signed-off-by: David S. Miller commit 22671da6f55c325484818085b96c01f6a924dbbc Author: Arnaldo Carvalho de Melo Date: Tue Apr 10 20:46:21 2007 -0700 [SK_BUFF]: Use skb_reset_network_header in skb_push cases skb_push updates and returns skb->data, so we can just call skb_reset_network_header after the call to skb_push. Signed-off-by: Arnaldo Carvalho de Melo Signed-off-by: David S. Miller commit 76ed9bd74d5681de531a358509da5cc1f9f0e7d5 Author: Arnaldo Carvalho de Melo Date: Tue Apr 10 20:45:18 2007 -0700 [SK_BUFF]: Introduce skb_reset_network_header(skb) For the common, open coded 'skb->nh.raw = skb->data' operation, so that we can later turn skb->nh.raw into a offset, reducing the size of struct sk_buff in 64bit land while possibly keeping it as a pointer on 32bit. This one touches just the most simple case, next will handle the slightly more "complex" cases. Signed-off-by: Arnaldo Carvalho de Melo Signed-off-by: David S. Miller commit af9c12c94a49210dab6059b190dac768b92043c9 Author: Arnaldo Carvalho de Melo Date: Sat Mar 10 16:21:45 2007 -0300 [IPV6]: Use skb->nh.ipv6h instead of casting skb->nh.raw nh.ipv6h is there exactly for this reason! Use it while it exists ;-) Signed-off-by: Arnaldo Carvalho de Melo Signed-off-by: David S. Miller commit 93e811ae9dbc73cc896590a865f89a3538b5b813 Author: Arnaldo Carvalho de Melo Date: Sat Mar 10 16:07:19 2007 -0300 [BONDING]: Introduce arp_pkt() For consistency with all the other skb->nh.raw accessors. Signed-off-by: Arnaldo Carvalho de Melo Signed-off-by: David S. Miller commit 82b5c1ad24152f6817e89ef8a176da1c92f500d9 Author: Arnaldo Carvalho de Melo Date: Sat Mar 10 15:56:08 2007 -0300 [PPPOE]: Introduce pppoe_hdr() For consistency with all the other skb->nh.raw accessors. Also do some really obvious simplifications in pppoe_recvmsg, well the kfree_skb one is not so obvious, but free() and kfree() have the same behaviour (hint :-) ). Signed-off-by: Arnaldo Carvalho de Melo Signed-off-by: David S. 
Miller commit 9ca9dec55a2709c33a39032b4b8d793e6f29eeb2 Author: Arnaldo Carvalho de Melo Date: Sat Mar 10 15:34:36 2007 -0300 [LLC]: Kill llc_set_pdu_hdr We'll have skb_reset_network_header soon. Signed-off-by: Arnaldo Carvalho de Melo Signed-off-by: David S. Miller commit 5b7dc59b1566b8a7bb95490afa59ea27e5f5180a Author: Arnaldo Carvalho de Melo Date: Mon Mar 19 15:33:04 2007 -0700 [SK_BUFF]: Introduce skb_mac_header() For the places where we need a pointer to the mac header, it is still legal to touch skb->mac.raw directly if just adding to, subtracting from or setting it to another layer header. This one also converts some more cases to skb_reset_mac_header() that my regex missed as it had no spaces before nor after '=', ugh. Signed-off-by: Arnaldo Carvalho de Melo Signed-off-by: David S. Miller commit 95c7141c8eccd487ac742a4028a12c59ddbd1af4 Author: Arnaldo Carvalho de Melo Date: Sat Mar 10 12:48:37 2007 -0300 [TCP]: Use skb_set_mac_header in tcp_collapse Signed-off-by: Arnaldo Carvalho de Melo Signed-off-by: David S. Miller commit 2e01e5568dff9feace804783472a34868367b70f Author: Arnaldo Carvalho de Melo Date: Sat Mar 10 12:47:22 2007 -0300 [TCP]: Do the layer header setting in tcp_collapse relative to skb->data That is equal to skb->head before skb_reserve, to help in the layer header changes. Signed-off-by: Arnaldo Carvalho de Melo Signed-off-by: David S. Miller commit 2afaa1ecc32e6aabe24bbd41158e026cc8575dfe Author: Arnaldo Carvalho de Melo Date: Sat Mar 10 12:40:27 2007 -0300 [SK_BUFF] xfrm: Use skb_set_mac_header in the memmove cases Signed-off-by: Arnaldo Carvalho de Melo Signed-off-by: David S. Miller commit 1019995600225dfc049353fb429a7153835fc75c Author: Arnaldo Carvalho de Melo Date: Sat Mar 10 12:30:58 2007 -0300 [SK_BUFF]: Introduce skb_set_mac_header() For the cases where we want to set skb->mac.raw to an offset from skb->data. Simple cases first, the memmove ones and specially pktgen will be left for later. Signed-off-by: Arnaldo Carvalho de Melo Signed-off-by: David S. Miller commit 5fc356110d247778e2af7636f470206c61081b77 Author: Arnaldo Carvalho de Melo Date: Sat Mar 10 12:17:29 2007 -0300 [LLC]: Use skb_reset_mac_header in llc_mac_hdr_init skb_push updates and returns skb->data, so we can just call skb_reset_mac_header after the call to skb_push. Signed-off-by: Arnaldo Carvalho de Melo Signed-off-by: David S. Miller commit ff0208acda8bc5d52f9d1e10b523811e898688cc Author: Arnaldo Carvalho de Melo Date: Sat Mar 10 12:14:56 2007 -0300 [LLC]: Use skb_reset_mac_header in llc_alloc_frame skb->head is equal to skb->data after alloc_skb, so reset the mac header while this is true, i.e. before skb_reserve. Signed-off-by: Arnaldo Carvalho de Melo Signed-off-by: David S. Miller commit eea8068329ce3db26ae0cad6b5c9d6df3d87184c Author: Arnaldo Carvalho de Melo Date: Mon Mar 19 15:30:44 2007 -0700 [SK_BUFF]: Introduce skb_reset_mac_header(skb) For the common, open coded 'skb->mac.raw = skb->data' operation, so that we can later turn skb->mac.raw into a offset, reducing the size of struct sk_buff in 64bit land while possibly keeping it as a pointer on 32bit. This one touches just the most simple case, next will handle the slightly more "complex" cases. Signed-off-by: Arnaldo Carvalho de Melo Signed-off-by: David S. Miller commit 6514f22d71096e2f1102e8c31563f943cce462c0 Author: Arnaldo Carvalho de Melo Date: Sat Mar 10 11:20:07 2007 -0300 [AOE]: Introduce aoe_hdr() For consistency with other skb->mac.raw users. Signed-off-by: Arnaldo Carvalho de Melo Signed-off-by: David S. 
Miller commit 7bf84275320b2060ef8988f2801a3e92d236f59d Author: Arnaldo Carvalho de Melo Date: Sat Mar 10 11:13:59 2007 -0300 [QETH]: Use eth_hdr() Signed-off-by: Arnaldo Carvalho de Melo Signed-off-by: David S. Miller commit 45a63344c5061903a2836b1cf1c911305462828a Author: Arnaldo Carvalho de Melo Date: Sat Mar 10 10:57:13 2007 -0300 [HIPPI/FDDI]: Make {hippi,fddi}_type_trans set skb->dev Now all the _type_trans routines are consistent in this regard. Signed-off-by: Arnaldo Carvalho de Melo Signed-off-by: David S. Miller commit 3c2ca07f58b34f5ef9d49dd0b39f4e7ab53228fc Author: Arnaldo Carvalho de Melo Date: Tue Apr 10 20:13:25 2007 -0700 [ETH]: Make eth_type_trans set skb->dev like the other *_type_trans One less thing for drivers writers to worry about. Signed-off-by: Arnaldo Carvalho de Melo Signed-off-by: David S. Miller commit 73f16335c82df2bd407b64d1f22879b50a8530e0 Author: Arnaldo Carvalho de Melo Date: Mon Mar 19 15:29:16 2007 -0700 [TR]: Make tr_type_trans set skb->dev Signed-off-by: Arnaldo Carvalho de Melo Signed-off-by: David S. Miller commit 4200f504f05eeaa98d1e5d2fed025f0c72f72ec0 Author: Arnaldo Carvalho de Melo Date: Mon Mar 19 15:27:07 2007 -0700 [TR]: Use tr_hdr() were appropriate Signed-off-by: Arnaldo Carvalho de Melo Signed-off-by: David S. Miller commit 2375519ab2c5520f803590602a92ece11a84b5a5 Author: Arnaldo Carvalho de Melo Date: Sat Mar 10 00:39:35 2007 -0300 [SOCKET]: Export __sock_recv_timestamp Kernel: arch/x86_64/boot/bzImage is ready (#2) MODPOST 1816 modules WARNING: "__sock_recv_timestamp" [net/sctp/sctp.ko] undefined! WARNING: "__sock_recv_timestamp" [net/packet/af_packet.ko] undefined! WARNING: "__sock_recv_timestamp" [net/key/af_key.ko] undefined! WARNING: "__sock_recv_timestamp" [net/ipv6/ipv6.ko] undefined! WARNING: "__sock_recv_timestamp" [net/atm/atm.ko] undefined! make[2]: *** [__modpost] Error 1 make[1]: *** [modules] Error 2 make: *** [_all] Error 2 Signed-off-by: Arnaldo Carvalho de Melo Signed-off-by: David S. Miller commit 27bbddae6ef9375a1bc350e507b5249465dd63b9 Author: Eric Dumazet Date: Sun Mar 25 22:14:49 2007 -0700 [NET]: Adding SO_TIMESTAMPNS / SCM_TIMESTAMPNS support Now that network timestamps use ktime_t infrastructure, we can add a new SOL_SOCKET sockopt SO_TIMESTAMPNS. This command is similar to SO_TIMESTAMP, but permits transmission of a 'timespec struct' instead of a 'timeval struct' control message. (nanosecond resolution instead of microsecond) Control message is labelled SCM_TIMESTAMPNS instead of SCM_TIMESTAMP A socket cannot mix SO_TIMESTAMP and SO_TIMESTAMPNS : the two modes are mutually exclusive. sock_recv_timestamp() became too big to be fully inlined so I added a __sock_recv_timestamp() helper function. Signed-off-by: Eric Dumazet CC: linux-arch@vger.kernel.org Signed-off-by: David S. Miller commit 61e8239944be7e309b8f405af73665bf719a7370 Author: Arnaldo Carvalho de Melo Date: Fri Mar 9 13:51:54 2007 -0800 [UDP]: Use __skb_pull since we have checked it won't fail with pskb_may_pull Signed-off-by: Arnaldo Carvalho de Melo Signed-off-by: David S. Miller commit 3dbb997dbf4c78fefec369e494d23d3342144585 Author: Eric Dumazet Date: Thu Mar 8 22:36:37 2007 -0800 [NET]: New sysctls should use __read_mostly tags net_msg_warn should be placed in the read_mostly section, to avoid performance problems on SMP Signed-off-by: Eric Dumazet Signed-off-by: David S. 
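The SO_TIMESTAMPNS/SCM_TIMESTAMPNS entry above is easiest to see from the receiving application's side. The fragment below is a minimal userspace sketch of that usage (fd is assumed to be an already-bound datagram socket; error handling trimmed):

    #include <stdio.h>
    #include <string.h>
    #include <sys/socket.h>
    #include <sys/uio.h>
    #include <time.h>

    static void recv_with_ns_timestamp(int fd)
    {
            int on = 1;
            char data[2048];
            char cbuf[CMSG_SPACE(sizeof(struct timespec))];
            struct iovec iov = { .iov_base = data, .iov_len = sizeof(data) };
            struct msghdr msg = {
                    .msg_iov = &iov, .msg_iovlen = 1,
                    .msg_control = cbuf, .msg_controllen = sizeof(cbuf),
            };
            struct cmsghdr *cm;

            /* SO_TIMESTAMP and SO_TIMESTAMPNS are mutually exclusive, as noted above */
            setsockopt(fd, SOL_SOCKET, SO_TIMESTAMPNS, &on, sizeof(on));

            if (recvmsg(fd, &msg, 0) < 0)
                    return;

            for (cm = CMSG_FIRSTHDR(&msg); cm; cm = CMSG_NXTHDR(&msg, cm)) {
                    if (cm->cmsg_level == SOL_SOCKET && cm->cmsg_type == SCM_TIMESTAMPNS) {
                            struct timespec ts;

                            memcpy(&ts, CMSG_DATA(cm), sizeof(ts));
                            printf("rx at %ld.%09ld\n", (long)ts.tv_sec, (long)ts.tv_nsec);
                    }
            }
    }
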
Miller commit e27aec9f7ceb9e7529a67ce9578e46e0c32bc4d7 Author: YOSHIFUJI Hideaki Date: Thu Mar 8 20:48:23 2007 -0800 [IPV6]: Ensure to truncate result and return full length for sticky options. Bug noticed by Chris Wright . Signed-off-by: YOSHIFUJI Hideaki Signed-off-by: David S. Miller commit 19d3006e46a91dd36cb58b0ff40f3307f744bfa0 Author: YOSHIFUJI Hideaki Date: Sun Mar 18 17:35:57 2007 -0700 [IPV6]: Return correct result for sticky options. We returned incorrect result with IPV6_RTHDRDSTOPTS, IPV6_RTHDR and IPV6_DSTOPTS. Signed-off-by: YOSHIFUJI Hideaki Signed-off-by: David S. Miller commit 1b1d674da92aae18c76775aabff7b3ee63b6cfd1 Author: Stephen Hemminger Date: Thu Mar 8 20:46:41 2007 -0800 [UDP]: deinline A couple of functions are exported or used indirectly so it is pointless to mark them as inline. Signed-off-by: Stephen Hemminger Signed-off-by: David S. Miller commit aa6a4c72445a82193fc3fd351fd97831be4b3050 Author: Stephen Hemminger Date: Thu Mar 8 20:46:03 2007 -0800 [NET]: deinline some functions Several functions are marked inline or forced inline, but it would be better to let the compiler decide. Signed-off-by: Stephen Hemminger Signed-off-by: David S. Miller commit ef78a87215b7926527959806c4e90a108fd90be0 Author: Stephen Hemminger Date: Thu Mar 8 20:45:19 2007 -0800 [TCP]: whitespace cleanup Add whitespace around keywords. Signed-off-by: Stephen Hemminger Signed-off-by: David S. Miller commit 6c619740094f8b22b1c56a4977cdf71857ba4f00 Author: Stephen Hemminger Date: Thu Mar 8 20:44:43 2007 -0800 [IPV4]: cleanup Add whitespace around keywords. Signed-off-by: Stephen Hemminger Signed-off-by: David S. Miller commit 5246a94c59d7322921f75ddbf5de5269ece09eff Author: Stephen Hemminger Date: Thu Mar 8 20:43:49 2007 -0800 [WIRELESS]: use ARRAY_SIZE() Use ARRAY_SIZE() macro now. Signed-off-by: Stephen Hemminger Signed-off-by: David S. Miller commit f679510819dee2761cf373d657f9197a596cffe5 Author: Stephen Hemminger Date: Tue Apr 10 20:10:33 2007 -0700 [NET] core: whitespace cleanup Fix whitespace around keywords. Fix indentation especially of switch statements. Signed-off-by: Stephen Hemminger Signed-off-by: David S. Miller commit af04162e12a95357c2a99def5b6c4e9d0b10a712 Author: Stephen Hemminger Date: Thu Mar 8 20:42:35 2007 -0800 [UDP]: ipv6 style cleanup Fix whitespace around keywords. Eliminate unnecessary ()'s on return statements. Signed-off-by: Stephen Hemminger Signed-off-by: David S. Miller commit e09844992d225c33e275b07b7b88ddf85b4bc6d7 Author: Stephen Hemminger Date: Thu Mar 8 20:41:55 2007 -0800 [UDP]: ipv4 whitespace cleanup Fix whitespace around keywords. Signed-off-by: Stephen Hemminger Signed-off-by: David S. Miller commit 3c8f1ad120c8453397af3237466595552dc8e908 Author: Stephen Hemminger Date: Thu Mar 8 20:41:08 2007 -0800 [NET]: Replace CONFIG_NET_DEBUG with sysctl. Covert network warning messages from a compile time to runtime choice. Removes kernel config option and replaces it with new /proc/sys/net/core/warnings. Signed-off-by: Stephen Hemminger Signed-off-by: David S. Miller commit d7a791f19f478ff54a7aa9d13d9e063825fb89a6 Author: Eric Dumazet Date: Sun Mar 18 17:33:16 2007 -0700 [NET]: Introduce SIOCGSTAMPNS ioctl to get timestamps with nanosec resolution Now network timestamps use ktime_t infrastructure, we can add a new ioctl() SIOCGSTAMPNS command to get timestamps in 'struct timespec'. User programs can thus access to nanosecond resolution. Signed-off-by: Eric Dumazet CC: Stephen Hemminger Signed-off-by: David S. 
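The SIOCGSTAMPNS entry above is the ioctl counterpart of the SO_TIMESTAMPNS option shown earlier: it reports the receive time of the last packet read from the socket as a struct timespec. A minimal userspace sketch (again assuming an fd on which a datagram has already been received):

    #include <stdio.h>
    #include <sys/ioctl.h>
    #include <linux/sockios.h>   /* SIOCGSTAMPNS */
    #include <time.h>

    static int print_last_rx_time(int fd)
    {
            struct timespec ts;

            if (ioctl(fd, SIOCGSTAMPNS, &ts) < 0)   /* like SIOCGSTAMP, but nanoseconds */
                    return -1;
            printf("last packet received at %ld.%09ld\n", (long)ts.tv_sec, (long)ts.tv_nsec);
            return 0;
    }
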
Miller commit 53c234b976f8353a0db05931c10088ab70d962e8 Author: Adrian Bunk Date: Wed Mar 7 19:33:52 2007 -0800 [TCP/DCCP/RANDOM]: Remove unused exports. This patch removes the following not or no longer used exports: - drivers/char/random.c: secure_tcp_sequence_number - net/dccp/options.c: sysctl_dccp_feat_sequence_window - net/netlink/af_netlink.c: netlink_set_err Signed-off-by: Adrian Bunk Signed-off-by: David S. Miller commit 9ab3c9b30834a935e898012eeb340c1b5ebfd158 Author: David S. Miller Date: Wed Mar 7 12:12:44 2007 -0800 [TCP]: Abstract out all write queue operations. This allows the write queue implementation to be changed, for example, to one which allows fast interval searching. Signed-off-by: David S. Miller commit a048b6515f9893a22a3536e034405a7a6354e00f Author: YOSHIFUJI Hideaki Date: Wed Mar 7 14:21:31 2007 +0900 [NET] TIPC: Use htons() where appropriate. Signed-off-by: YOSHIFUJI Hideaki Signed-off-by: David S. Miller commit a3fd37b077474f33e0776e78d1d5fa61edca7a60 Author: YOSHIFUJI Hideaki Date: Wed Mar 7 14:21:20 2007 +0900 [NET] SCHED: Use htons() where appropriate. Signed-off-by: YOSHIFUJI Hideaki Signed-off-by: David S. Miller commit a8e81efcd82be4701ba30b97d38b1e43187087ab Author: YOSHIFUJI Hideaki Date: Wed Mar 7 14:21:00 2007 +0900 [NET] NETFILTER: Use htonl() where appropriate. Signed-off-by: YOSHIFUJI Hideaki Signed-off-by: David S. Miller commit 7f1a301871f224d0d7ac5b90297efe65e76164a7 Author: YOSHIFUJI Hideaki Date: Wed Mar 7 14:19:10 2007 +0900 [NET] IPV4: Use hton{s,l}() where appropriate. Signed-off-by: YOSHIFUJI Hideaki Signed-off-by: David S. Miller commit 6bbcd39fbf3cd7fd56d30d65868c1c0d241b37ec Author: YOSHIFUJI Hideaki Date: Wed Mar 7 14:19:05 2007 +0900 [NET] IEEE80211: Use htons() where appropriate. Signed-off-by: YOSHIFUJI Hideaki Signed-off-by: David S. Miller commit 187e6e80fd3e180d8fb0b1c634c2ccd5524d42c0 Author: YOSHIFUJI Hideaki Date: Wed Mar 7 14:19:03 2007 +0900 [NET] ETHERNET: Use htons() where appropriate. Signed-off-by: YOSHIFUJI Hideaki Signed-off-by: David S. Miller commit 0fb377deccddc0b3baee5a2b0b51ccbd08ee0027 Author: YOSHIFUJI Hideaki Date: Sun Mar 25 20:13:04 2007 -0700 [NET] CORE: Use htons() where appropriate. Signed-off-by: YOSHIFUJI Hideaki Signed-off-by: David S. Miller commit 924a54c05f9fa64ebe370d582ced7e4dc0a9d69b Author: YOSHIFUJI Hideaki Date: Sun Mar 25 20:12:50 2007 -0700 [NET] BLUETOOTH: Use cpu_to_le{16,32}() where appropriate. Signed-off-by: YOSHIFUJI Hideaki Signed-off-by: David S. Miller commit 4c3327e5542868de330a8c207204028b769ec4b5 Author: YOSHIFUJI Hideaki Date: Sun Mar 25 20:12:32 2007 -0700 [NET] ATM: Use htons() where appropriate. Signed-off-by: YOSHIFUJI Hideaki Signed-off-by: David S. Miller commit 9765d64976c932126bb280e54ef42dfccaca51f0 Author: YOSHIFUJI Hideaki Date: Sun Mar 25 20:12:18 2007 -0700 [NET] 8021Q: Use htons() where appropriate. Signed-off-by: YOSHIFUJI Hideaki Signed-off-by: David S. Miller commit 869ce99159be407923067fe4aeda563690399a88 Author: YOSHIFUJI Hideaki Date: Sun Mar 25 20:11:55 2007 -0700 [NET] 802: Use hton{s,l}() where appropriate. Signed-off-by: YOSHIFUJI Hideaki Signed-off-by: David S. Miller commit 9f565eb6b6603c2153dff0da82a6fd8b32a3899b Author: Herbert Xu Date: Sun Mar 25 20:10:56 2007 -0700 [UDP]: Clean up UDP-Lite receive checksum This patch eliminates some duplicate code for the verification of receive checksums between UDP-Lite and UDP. 
It does this by introducing __skb_checksum_complete_head which is identical to __skb_checksum_complete apart from the fact that it takes a length parameter rather than always checksumming the first skb->len bytes. As a result UDP-Lite will be able to use hardware checksum offload for packets which do not use partial coverage checksums. It also means that UDP-Lite loopback no longer does unnecessary checksum verification. If any NICs start supporting UDP-Lite this would also start working automatically. This patch removes the assumption that msg_flags has MSG_TRUNC clear upon entry in recvmsg. Signed-off-by: Herbert Xu Signed-off-by: David S. Miller commit 0d3da1dfa445fd1a3448bc43c8a87b3b90c7c67b Author: Herbert Xu Date: Tue Mar 6 20:29:58 2007 -0800 [UDP6]: Restore sk_filter optimisation This reverts the changeset [IPV6]: UDPv6 checksum. We always need to check the UDPv6 checksum because it is mandatory. The sk_filter optimisation has nothing to do with whether we verify the checksum. It simply postpones it to the point when the user calls recv or poll. Signed-off-by: Herbert Xu Signed-off-by: David S. Miller commit 0e1d5bd08c4dece188df6d66f0a9bbc963e6abda Author: Eric Dumazet Date: Tue Mar 6 20:23:10 2007 -0800 [IPV4]: Optimize inet_getpeer() 1) Some sysctl vars are declared __read_mostly 2) We can avoid updating stack[] when doing an AVL lookup only. The lookup() macro is extended to receive a second parameter, which may be NULL in case of a pure lookup (no need to save the AVL path). This removes unnecessary instructions, because the compiler knows whether this _stack parameter is NULL or not. The text size of net/ipv4/inetpeer.o is 2063 bytes instead of 2107 on x86_64. Signed-off-by: Eric Dumazet Signed-off-by: David S. Miller commit 04fb551ff75376d4f89234dc3a869e7a54d0a929 Author: Stephen Hemminger Date: Tue Mar 6 20:21:20 2007 -0800 [TCP] TCP Yeah: cleanup Eliminate need for full 6/4/64 divide to compute queue. Variable maxqueue was really a constant. Fix indentation. Signed-off-by: Stephen Hemminger Signed-off-by: David S. Miller commit 1beb7ca239509dda42cbe205fe79785f9f1b681e Author: Stephen Hemminger Date: Sun Mar 25 20:21:15 2007 -0700 [TCP] tcp_cubic: faster cube root The Newton-Raphson method is quadratically convergent, so only a small fixed number of steps are necessary. Therefore it is faster to unroll the loop. Since div64_64 is no longer inline it won't cause code explosion. Also fixes a bug that can occur if x^2 was bigger than 32 bits. Signed-off-by: Stephen Hemminger Signed-off-by: David S. Miller commit 1638810600e3bbf8f6035f1a5b399edb07bab7fe Author: YOSHIFUJI Hideaki Date: Tue Mar 6 20:19:26 2007 -0800 [ATM] ENI: Convert from struct timeval to ktime_t. Signed-off-by: YOSHIFUJI Hideaki Signed-off-by: David S. Miller commit e508f8119c6683735364700c2d02bc8566d076db Author: David S. Miller Date: Sun Mar 25 20:27:59 2007 -0700 [NETLINK]: Limit NLMSG_GOODSIZE to 8K. Signed-off-by: David S. Miller commit 80e7c60f86a29541206972221f53a7b4fda909e8 Author: YOSHIFUJI Hideaki Date: Wed Feb 28 23:13:20 2007 +0900 [IPV6] ADDRCONF: Fix possible inet6_ifaddr leakage with CONFIG_OPTIMISTIC_DAD. The inet6_ifaddr for the source address of an RS is leaked if the address is not an optimistic address. Signed-off-by: YOSHIFUJI Hideaki Signed-off-by: David S. Miller commit 1247d4ef85506a3a48788db02f927cc9a7ed2dfb Author: Neil Horman Date: Thu Feb 22 00:18:49 2007 +0900 [IPV6] ADDRCONF: Optimistic Duplicate Address Detection (RFC 4429) Support.
Nominally an autoconfigured IPv6 address is added to an interface in the Tentative state (as per RFC 2462). Addresses in this state remain in this state while the Duplicate Address Detection process operates on them to determine their uniqueness on the network. During this period, these tentative addresses may not be used for communication, increasing the time before a node may be able to communicate on a network. Using Optimistic Duplicate Address Detection, autoconfigured addresses may be used immediately for communication on the network, as long as certain rules are followed to avoid conflicts with other nodes during the Duplicate Address Detection process. Signed-off-by: Neil Horman Signed-off-by: YOSHIFUJI Hideaki Signed-off-by: David S. Miller commit 5bbcddd4dae0a65e47eb161272cccb812bc3822f Author: Yasuyuki Kozakai Date: Thu Nov 30 14:43:28 2006 +0900 [IPV6] IP6TUNNEL: Enable to control the handled inner protocol. ip6_tunnel before supporting IPv4/IPv6 tunnel allows only IPPROTO_IPV6 in configurations from userland. This allows userland to set IPPROTO_IPIP and 0(wildcard). ip6_tunnel only handles allowed inner protocols. Signed-off-by: Yasuyuki Kozakai Signed-off-by: YOSHIFUJI Hideaki Signed-off-by: David S. Miller commit 1f755a9b3ec52909f636f126c630a6c2baba6a25 Author: Yasuyuki Kozakai Date: Sat Feb 10 00:30:33 2007 +0900 [IPV6] IP6TUNNEL: Rename functions ip6ip6_* to ip6_tnl_*. Signed-off-by: Yasuyuki Kozakai Signed-off-by: YOSHIFUJI Hideaki Signed-off-by: David S. Miller commit 5b53e0af32a5fde0bbf068b454d092cbbfd7cb66 Author: Yasuyuki Kozakai Date: Thu Feb 15 00:43:16 2007 +0900 [IPV6] IP6TUNNEL: Add support to IPv4 over IPv6 tunnel. Some notes - Protocol number IPPROTO_IPIP is used for IPv4 over IPv6 packets. - If IP6_TNL_F_USE_ORIG_TCLASS is set, TOS in IPv4 header is copied to Traffic Class in outer IPv6 header on xmit. - IP6_TNL_F_USE_ORIG_FLOWLABEL is ignored on xmit of IPv4 packets, because IPv4 header does not have flow label. - Kernel sends ICMP error if IPv4 packet is too big on xmit, even if DF flag is not set. Signed-off-by: Yasuyuki Kozakai Signed-off-by: YOSHIFUJI Hideaki Signed-off-by: David S. Miller commit 3156425fd2985fbf5d1fe6e65275271ea3b6ec45 Author: Yasuyuki Kozakai Date: Sun Nov 5 22:56:45 2006 +0900 [IPV6] IP6TUNNEL: Split out generic routine in ip6ip6_xmit(). This enables to add IPv4/IPv6 specific handling later, Signed-off-by: Yasuyuki Kozakai Signed-off-by: YOSHIFUJI Hideaki Signed-off-by: David S. Miller commit fe3ae4c6802ff0afb34247d80c2065c3aaac909f Author: Yasuyuki Kozakai Date: Fri Nov 3 09:39:14 2006 +0900 [IPV6] IP6TUNNEL: Split out generic routine in ip6ip6_rcv(). This enables to add IPv4/IPv6 specific handling later, Signed-off-by: Yasuyuki Kozakai Signed-off-by: YOSHIFUJI Hideaki Signed-off-by: David S. Miller commit e6216adc5c6195255bcb862a69983f0092228aaf Author: Yasuyuki Kozakai Date: Tue Oct 31 23:11:25 2006 +0900 [IPV6] IP6TUNNEL: Split out generic routine in ip6ip6_err(). This enables to add IPv4/IPv6 specific error handling later, Signed-off-by: Yasuyuki Kozakai Signed-off-by: YOSHIFUJI Hideaki Signed-off-by: David S. Miller commit d699fd9bec6ea6216769c4266a567cf2d2f73655 Author: YOSHIFUJI Hideaki Date: Thu Feb 22 22:05:40 2007 +0900 [IPV6]: Decentralize EXPORT_SYMBOLs. Signed-off-by: YOSHIFUJI Hideaki commit ff22facacaf93d56f39259dada41c05b00e1501a Author: David S. Miller Date: Tue Mar 6 17:02:35 2007 -0800 [NETLINK]: Mirror UDP MSG_TRUNC semantics. 
If the user passes MSG_TRUNC in via msg_flags, return the full packet size not the truncated size. Idea from Herbert Xu and Thomas Graf. Signed-off-by: David S. Miller commit 7464f2e54492d799a20b906e1252de1600404b17 Author: Eric Dumazet Date: Thu Apr 19 16:16:32 2007 -0700 [NET]: convert network timestamps to ktime_t We currently use a special structure (struct skb_timeval) and plain 'struct timeval' to store packet timestamps in sk_buffs and struct sock. This has some drawbacks : - Fixed resolution of micro second. - Waste of space on 64bit platforms where sizeof(struct timeval)=16 I suggest using ktime_t that is a nice abstraction of high resolution time services, currently capable of nanosecond resolution. As sizeof(ktime_t) is 8 bytes, using ktime_t in 'struct sock' permits a 8 byte shrink of this structure on 64bit architectures. Some other structures also benefit from this size reduction (struct ipq in ipv4/ip_fragment.c, struct frag_queue in ipv6/reassembly.c, ...) Once this ktime infrastructure adopted, we can more easily provide nanosecond resolution on top of it. (ioctl SIOCGSTAMPNS and/or SO_TIMESTAMPNS/SCM_TIMESTAMPNS) Note : this patch includes a bug correction in compat_sock_get_timestamp() where a "err = 0;" was missing (so this syscall returned -ENOENT instead of 0) Signed-off-by: Eric Dumazet CC: Stephen Hemminger CC: John find Signed-off-by: David S. Miller commit e59227be300e9c826645a9bbf6b7ff52161a1c30 Author: Stephen Hemminger Date: Sun Mar 25 19:54:23 2007 -0700 [NET]: div64_64 consolidate (rev3) Here is the current version of the 64 bit divide common code. Signed-off-by: Stephen Hemminger Signed-off-by: David S. Miller commit 9fda596002e28b4501acb97cf04fa9feac8cc271 Author: James Morris Date: Sun Mar 4 16:12:44 2007 -0800 [NET]: Convert xtime.tv_sec to get_seconds() Where appropriate, convert references to xtime.tv_sec to the get_seconds() helper function. Signed-off-by: James Morris Signed-off-by: David S. Miller commit b37c6e24be9843804a002a197df48b75369b119e Author: Stephen Hemminger Date: Sun Mar 4 16:11:51 2007 -0800 [PKTGEN]: fix device name handling Since devices can change name and other wierdness, don't hold onto a copy of device name, instead use pointer to output device. Fix a couple of leaks in error handling path as well. Signed-off-by: Stephen Hemminger Signed-off-by: Robert Olsson Signed-off-by: David S. Miller commit ffc10f8a99c2fdb5a0d0bd99913dbee3e1c07192 Author: Stephen Hemminger Date: Sun Mar 4 16:08:08 2007 -0800 [PKTGEN]: don't use __constant_htonl() The existing htonl() macro is smart enough to do the same code as using __constant_htonl() and it looks cleaner. Signed-off-by: Stephen Hemminger Signed-off-by: Robert Olsson Signed-off-by: David S. Miller commit cb280e5d3a36abf9eb8965d252c8c1439745dc21 Author: Stephen Hemminger Date: Sun Mar 4 16:07:28 2007 -0800 [PKTGEN]: use random32 Can use random32() now. Signed-off-by: Stephen Hemminger Signed-off-by: Robert Olsson Signed-off-by: David S. Miller commit 4dc37d211cc6be91b7c4a134787ec1fbde2ee716 Author: Stephen Hemminger Date: Sun Mar 4 16:06:47 2007 -0800 [PKTGEN]: use pr_debug Remove private debug macro and replace with standard version Signed-off-by: Stephen Hemminger Signed-off-by: Robert Olsson Signed-off-by: David S. Miller commit 6d121134fa044f510986a4d981acb325a2c564da Author: Eric Dumazet Date: Sun Mar 4 16:05:44 2007 -0800 [NET]: Keep sk_backlog near sk_lock sk_backlog is a critical field of struct sock. 
(known famous words) It is (ab)used in hot paths, in particular in release_sock(), tcp_recvmsg(), tcp_v4_rcv(), sk_receive_skb(). It really makes sense to place it next to sk_lock, because sk_backlog is only used after sk_lock locked (and thus memory cache line in L1 cache). This should reduce cache misses and sk_lock acquisition time. (In theory, we could only move the head pointer near sk_lock, and leaving tail far away, because 'tail' is normally not so hot, but keep it simple :) ) Signed-off-by: Eric Dumazet Signed-off-by: David S. Miller commit 1e79a04ef36f97216b3820e90206b094f90d7f3a Author: Ilpo Järvinen Date: Fri Mar 2 13:34:19 2007 -0800 [TCP]: FRTO undo response falls back to ratehalving one if ECEd Undoing ssthresh is disabled in fastretrans_alert whenever FLAG_ECE is set by clearing prior_ssthresh. The clearing does not protect FRTO because FRTO operates before fastretrans_alert. Moving the clearing of prior_ssthresh earlier seems to be a suboptimal solution to the FRTO case because then FLAG_ECE will cause a second ssthresh reduction in try_to_open (the first occurred when FRTO was entered). So instead, FRTO falls back immediately to the rate halving response, which switches TCP to CA_CWR state preventing the latter reduction of ssthresh. If the first ECE arrived before the ACK after which FRTO is able to decide RTO as spurious, prior_ssthresh is already cleared. Thus no undoing for ssthresh occurs. Besides, FLAG_ECE should be set also in the following ACKs resulting in rate halving response that sees TCP is already in CA_CWR, which again prevents an extra ssthresh reduction on that round-trip. If the first ECE arrived before RTO, ssthresh has already been adapted and prior_ssthresh remains cleared on entry because TCP is in CA_CWR (the same applies also to a case where FRTO is entered more than once and ECE comes in the middle). High_seq must not be touched after tcp_enter_cwr because CWR round-trip calculation depends on it. I believe that after this patch, FRTO should be ECN-safe and even able to take advantage of synergy benefits. Signed-off-by: Ilpo Järvinen Signed-off-by: David S. Miller commit 266eb6586e5dc813f54098c76f3092fe1b465b3d Author: Ilpo Järvinen Date: Fri Mar 2 13:27:25 2007 -0800 [TCP]: Complete icsk-to-local-variable change (in tcp_enter_cwr) A local variable for icsk was created but this change was missing. Spotted by Jarek Poplawski. Signed-off-by: Ilpo Järvinen Signed-off-by: David S. Miller commit 17150c1494646c9389bd1b718ae389def258d4a5 Author: Ilpo Järvinen Date: Tue Feb 27 10:10:55 2007 -0800 [TCP] Sysctl documentation: tcp_frto_response In addition, fixed minor things in tcp_frto sysctl. Signed-off-by: Ilpo Järvinen Signed-off-by: David S. Miller commit d003a464e194ea6cda6d7bd6474c5146c0da6343 Author: Ilpo Järvinen Date: Tue Feb 27 10:09:49 2007 -0800 [TCP]: Add two new spurious RTO responses to FRTO New sysctl tcp_frto_response is added to select amongst these responses: - Rate halving based; reuses CA_CWR state (default) - Very conservative; used to be the only one available (=1) - Undo cwr; undoes ssthresh and cwnd reductions (=2) The response with rate halving requires a new parameter to tcp_enter_cwr because FRTO has already reduced ssthresh and doing a second reduction there has to be prevented. In addition, to keep things nice on 80 cols screen, a local variable was added. Signed-off-by: Ilpo Järvinen Signed-off-by: David S. 
Miller commit cb4a76b23660ba76a518ce52baf115e7f77ef198 Author: Ilpo Järvinen Date: Fri Feb 23 16:22:06 2007 -0800 [TCP]: Correct reordering detection change (no FRTO case) The reordering detection must work also when FRTO has not been used at all which was the original intention of mine, just the expression of the idea was flawed. Signed-off-by: Ilpo Järvinen Signed-off-by: David S. Miller commit 75b884c827cd6b4e78d56573266a50bf72968f8c Author: David S. Miller Date: Thu Feb 22 22:52:59 2007 -0800 [TCP]: Make snd_cwnd_clamp a u32. Signed-off-by: David S. Miller commit ca050cf1cd1c978858b8ccc3fd882cbaf1e66d5e Author: Eric Dumazet Date: Thu Feb 22 03:20:44 2007 -0800 [TCP]: Keep copied_seq, rcv_wup and rcv_next together. I noticed in oprofile study a cache miss in tcp_rcv_established() to read copied_seq. ffffffff80400a80 : /* tcp_rcv_established total: 4034293   2.0400 */  55493  0.0281 :ffffffff80400bc9:   mov    0x4c8(%r12),%eax copied_seq 543103  0.2746 :ffffffff80400bd1:   cmp    0x3e0(%r12),%eax   rcv_nxt     if (tp->copied_seq == tp->rcv_nxt &&         len - tcp_header_len <= tp->ucopy.len) { In this function, the cache line 0x4c0 -> 0x500 is used only for this reading 'copied_seq' field. rcv_wup and copied_seq should be next to rcv_nxt field, to lower number of active cache lines in hot paths. (tcp_rcv_established(), tcp_poll(), ...) As you suggested, I changed tcp_create_openreq_child() so that these fields are changed together, to avoid adding a new store buffer stall. Patch is 64bit friendly (no new hole because of alignment constraints) Signed-off-by: Eric Dumazet Signed-off-by: David S. Miller commit 9788ae182b66d3dc12dfc5ecfec2e35de08dbfc8 Author: Ilpo Järvinen Date: Thu Feb 22 01:13:58 2007 -0800 [TCP]: struct *sock argument renamed: sp -> sk In general, TCP code uses "sk" for struct sock pointer. Signed-off-by: Ilpo Järvinen Signed-off-by: David S. Miller commit 58582a0c9ab838458ec37c9677836daae1f499ae Author: John Heffner Date: Sun Mar 25 19:21:45 2007 -0700 [TCP]: Add RFC3742 Limited Slow-Start, controlled by variable sysctl_tcp_max_ssthresh. Signed-off-by: John Heffner Signed-off-by: David S. Miller commit 3e94e0786e38faadd7aea8629110e50dfa6d79a0 Author: Angelo P. Castellani Date: Thu Feb 22 00:23:05 2007 -0800 [TCP] YeAH-TCP: algorithm implementation YeAH-TCP is a sender-side high-speed enabled TCP congestion control algorithm, which uses a mixed loss/delay approach to compute the congestion window. It's design goals target high efficiency, internal, RTT and Reno fairness, resilience to link loss while keeping network elements load as low as possible. For further details look here: http://wil.cs.caltech.edu/pfldnet2007/paper/YeAH_TCP.pdf Signed-off-by: Angelo P. Castellani Signed-off-by: David S. Miller commit 63be5a5ea7d1ae03255b50f42e59b5b13fbc800d Author: Ilpo Järvinen Date: Wed Feb 21 23:16:38 2007 -0800 [TCP] FRTO: Sysctl documentation for SACK enhanced version The description is overly verbose to avoid ambiguity between "SACK enabled" and "SACK enhanced FRTO" Signed-off-by: Ilpo Järvinen Signed-off-by: David S. Miller commit 5661908fdfcb80873376b37aebdd259598de28ad Author: Ilpo Järvinen Date: Wed Feb 21 23:16:11 2007 -0800 [TCP]: SACK enhanced FRTO Implements the SACK-enhanced FRTO given in RFC4138 using the variant given in Appendix B. RFC4138, Appendix B: "This means that in order to declare timeout spurious, the TCP sender must receive an acknowledgment for non-retransmitted segment between SND.UNA and RecoveryPoint in algorithm step 3. 
RecoveryPoint is defined in the conservative SACK-recovery algorithm [RFC3517]" The basic version of the FRTO algorithm can still be used also when SACK is enabled. To enable the SACK-enhanced version, the tcp_frto sysctl is set to 2. Signed-off-by: Ilpo Järvinen Signed-off-by: David S. Miller commit 421a7966202153850e9851d7b50948bdcc3a4527 Author: Ilpo Järvinen Date: Wed Feb 21 23:14:42 2007 -0800 [TCP]: Prevent reordering adjustments during FRTO To be honest, I'm not too sure how the reord stuff works in the first place but this seems necessary. When FRTO has been active, the one and only retransmission could be unnecessary but the state and sending order might not be what the sacktag code expects it to be (to work correctly). Signed-off-by: Ilpo Järvinen Signed-off-by: David S. Miller commit cc3b349fb83deadd6fcf79cefb9c6dae38844113 Author: Ilpo Järvinen Date: Wed Feb 21 23:13:47 2007 -0800 [TCP] FRTO: Fake cwnd for ssthresh callback TCP without FRTO would be in Loss state with a small cwnd. FRTO, however, leaves cwnd (typically) at a larger value, which causes ssthresh to become too large in case RTO is triggered again, compared to what conventional recovery would do. Because consecutive RTOs result in only a single ssthresh reduction, an RTO+cumulative ACK+RTO pattern is required to trigger this event. A large comment is included for congestion control module writers trying to figure out what the CA_EVENT_FRTO handler should do, because there exists a remote possibility of incompatibility between FRTO and module-defined ssthresh functions. Signed-off-by: Ilpo Järvinen Signed-off-by: David S. Miller commit d2ef9f547d59eb12047073499b1f8e221fc1bb4e Author: Ilpo Järvinen Date: Wed Feb 21 23:11:57 2007 -0800 [TCP] FRTO: Reverse RETRANS bit clearing logic Previously RETRANS bits were cleared on the entry to FRTO. We postpone that into tcp_enter_frto_loss, which is really the place where the clearing should be done anyway. This allows simplification of the logic from a clearing loop to clearing only the head skb. Besides, the other changes made in the previous patches to tcp_use_frto made it impossible for the non-SACKed FRTO to be entered if anything other than the head has been rexmitted. With SACK-enhanced FRTO (and Appendix B), however, there can be a number of retransmissions in flight when the RTO expires (the same thing could happen before this patchset also with non-SACK FRTO). To not introduce any jumpiness into the packet counting during FRTO, instead of clearing RETRANS bits from skbs during entry, do it later on. Signed-off-by: Ilpo Järvinen Signed-off-by: David S. Miller commit 6df06a7ca25c3d815663ffead459a051a3babb66 Author: Ilpo Järvinen Date: Wed Feb 21 23:10:39 2007 -0800 [TCP] FRTO: Entry is allowed only during (New)Reno like recovery This interpretation comes from RFC4138: "If the sender implements some loss recovery algorithm other than Reno or NewReno [FHG04], the F-RTO algorithm SHOULD NOT be entered when earlier fast recovery is underway." I think the RFC means to say (especially in the light of Appendix B) that ...recovery is underway (not just fast recovery) or was underway when it was interrupted by an earlier (F-)RTO that hasn't yet been resolved (snd_una has not advanced enough). Thus, my interpretation is that whenever TCP has ever retransmitted anything other than the head, the basic version cannot be used because then the order assumptions which are used as the FRTO basis do not hold. NewReno has only the head segment retransmitted at a time.
Therefore, walk up to the first segment that has not been SACKed; if neither that segment nor anything before it is retransmitted, we know for sure that nothing after the non-SACKed segment should be either. This assumption is valid because TCPCB_EVER_RETRANS does not leave holes but each non-SACKed segment is rexmitted in-order. The check for retrans_out > 1 avoids a more expensive walk through the skb list, as we can know the result beforehand: F-RTO will not be allowed. A SACKed skb can turn into a non-SACKed one only in the extremely rare case of SACK reneging; in that case we might fail to detect retransmissions if there were any for segments other than the head. To get rid of that feature, the whole rexmit queue would have to be walked (always) or FRTO should be prevented when SACK reneging happens. Of course RTO should still trigger after reneging, which makes this issue even less likely to show up. And as long as the response is as conservative as it is now, nothing bad happens even then. Signed-off-by: Ilpo Järvinen Signed-off-by: David S. Miller commit c3946766d8f85a4ed805f831adf5a788638fd325 Author: Ilpo Järvinen Date: Wed Feb 21 23:08:34 2007 -0800 [TCP]: Prevent unrelated cwnd adjustment while using FRTO FRTO controls cwnd while it still processes the ACK input or has just reverted back to conventional RTO recovery; the normal rules apply when FRTO has reverted to standard congestion control. Signed-off-by: Ilpo Järvinen Signed-off-by: David S. Miller commit 1c17bdf6c1f9f6cbb5f99d468f09803192ff4561 Author: Ilpo Järvinen Date: Wed Feb 21 23:07:27 2007 -0800 [TCP] FRTO: frto_counter modulo-op converted to two assignments Signed-off-by: Ilpo Järvinen Signed-off-by: David S. Miller commit fb52f24d3a23e25a9d5741e89f7a372b7592001f Author: Ilpo Järvinen Date: Wed Feb 21 23:06:52 2007 -0800 [TCP]: Don't enter fast recovery while using FRTO Because TCP is not in Loss state during FRTO recovery, fast recovery could be triggered by accident. Non-SACK FRTO is more robust than the not-yet-included SACK-enhanced version (which can receive a high number of duplicate ACKs with SACK blocks during FRTO), at least with unidirectional transfers, but under extraordinary patterns fast recovery can be incorrectly triggered, e.g., data loss + ACK losses => a cumulative ACK with enough SACK blocks to meet the sacked_out >= dupthresh condition. Signed-off-by: Ilpo Järvinen Signed-off-by: David S. Miller commit 4278aacf010313ede5a9afb2da31f33da966ee88 Author: Ilpo Järvinen Date: Wed Feb 21 23:06:03 2007 -0800 [TCP] FRTO: Response should reset also snd_cwnd_cnt Since the purpose is to reduce CWND, we prevent immediate growth. This is not a major issue nor is "the correct way" specified anywhere. Signed-off-by: Ilpo Järvinen Signed-off-by: David S. Miller commit 6a2a36afa91f4ad6aabea52d76e4e3f4db0001b8 Author: Ilpo Järvinen Date: Wed Feb 21 23:05:18 2007 -0800 [TCP] FRTO: fixes fallback to conventional recovery The FRTO detection did not care how the ACK pattern affects the cwnd calculation of the conventional recovery. This caused an incorrect setting of cwnd when the fallback becomes necessary. The knowledge tcp_process_frto() has about the incoming ACK is now passed on to tcp_enter_frto_loss() in the allowed_segments parameter, which gives the number of segments that must be added to packets-in-flight while calculating the new cwnd.
Instead of snd_una we use FLAG_DATA_ACKED in duplicate ACK detection because RFC4138 states (in Section 2.2): If the first acknowledgment after the RTO retransmission does not acknowledge all of the data that was retransmitted in step 1, the TCP sender reverts to the conventional RTO recovery. Otherwise, a malicious receiver acknowledging partial segments could cause the sender to declare the timeout spurious in a case where data was lost. If the next ACK after the RTO is a duplicate, we do not retransmit anything, which is equal to what conservative conventional recovery does in such a case. Signed-off-by: Ilpo Järvinen Signed-off-by: David S. Miller commit 0fd52867b5d65b411b9924287ac34567e3b63bac Author: Ilpo Järvinen Date: Wed Feb 21 23:04:11 2007 -0800 [TCP] FRTO: Ignore some uninteresting ACKs Handles an RFC4138 shortcoming (in step 2); it should also have a case c) which ignores ACKs that neither are duplicates nor advance the window (opposite-direction data, winupdate). Signed-off-by: Ilpo Järvinen Signed-off-by: David S. Miller commit 3277803c8d8ad3886b55366cd3512281408eb1f8 Author: Ilpo Järvinen Date: Wed Feb 21 23:03:35 2007 -0800 [TCP] FRTO: Use Disorder state during operation instead of Open Retransmission counter assumptions are to be changed. Forcing reasons to do this exist: using the sysctl in the check would be racy as soon as FRTO starts to ignore some ACKs (done in the following patches), and userspace may disable it at any moment, giving a nice oops if the timing is right. frto_counter would be inaccessible from userspace, but with SACK-enhanced FRTO retrans_out can include other segments than the head, possibly leaving it non-zero after a spurious RTO; boom again. Luckily, the solution seems rather simple: never go directly to Open state but use Disorder instead. This does not really change much, since TCP could anyway change its state to Disorder during FRTO using the path tcp_fastretrans_alert -> tcp_try_to_open (e.g., when a SACK block makes an ACK dubious). Besides, Disorder seems to be the state where TCP should be if not recovering (in Recovery or Loss state) while having some retransmissions in flight (see tcp_try_to_open), which is exactly what happens with FRTO. Signed-off-by: Ilpo Järvinen Signed-off-by: David S. Miller commit ea8f5b2ad02527d40f3392a41000771d3af66466 Author: Ilpo Järvinen Date: Wed Feb 21 23:02:30 2007 -0800 [TCP] FRTO: Consecutive RTOs keep prior_ssthresh and ssthresh In case a latency spike causes more than one RTO, the latter should not cause the already reduced ssthresh to propagate into the prior_ssthresh, since FRTO declares all such RTOs spurious at once or none of them. In the treatment of ssthresh we mimic what tcp_enter_loss() does. The previous state (in frto_counter) must be available until we have checked it in tcp_enter_frto(), and so must the ACK information flag in process_frto(). Signed-off-by: Ilpo Järvinen Signed-off-by: David S. Miller commit 99c8fb3a8f18e5d0006bf580e3a2dca4de786c1d Author: Ilpo Järvinen Date: Wed Feb 21 23:01:36 2007 -0800 [TCP] FRTO: Comment cleanup & improvement Moved comments out from the body of process_frto() to the head (the preferred way; see Documentation/CodingStyle). Bonus: it's much easier to read in this compacted form. The FRTO algorithm and implementation are described in greater detail. For the interested reader, more information is available in RFC4138. Signed-off-by: Ilpo Järvinen Signed-off-by: David S.
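Several of the FRTO entries above come down to two runtime knobs: tcp_frto (where 2 selects the SACK-enhanced variant) and tcp_frto_response (0 for the default rate-halving response, 1 very conservative, 2 undo). As a small sketch of how a test setup might select them from userspace, using the usual procfs locations for net.ipv4 sysctls (the paths are assumed here; the values are the ones listed in the entries above):

    #include <stdio.h>

    static int write_sysctl(const char *path, const char *val)
    {
            FILE *f = fopen(path, "w");

            if (!f)
                    return -1;
            fputs(val, f);
            return fclose(f);
    }

    int main(void)
    {
            write_sysctl("/proc/sys/net/ipv4/tcp_frto", "2");          /* SACK-enhanced FRTO */
            write_sysctl("/proc/sys/net/ipv4/tcp_frto_response", "0"); /* rate-halving response */
            return 0;
    }
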
Miller commit 785613318cb6f99e95f2b0d75249e3f21a7d8fab Author: Ilpo Järvinen Date: Wed Feb 21 22:59:58 2007 -0800 [TCP] FRTO: Moved tcp_use_frto from tcp.h to tcp_input.c In addition, removed inline. Signed-off-by: Ilpo Järvinen Signed-off-by: David S. Miller commit ebf96e6e4565b22e4728e9c07936d3f529fb82d2 Author: Ilpo Järvinen Date: Wed Feb 21 22:56:19 2007 -0800 [TCP] FRTO: Separated response from FRTO detection algorithm FRTO spurious RTO detection algorithm (RFC4138) does not include response to a detected spurious RTO but can use different response algorithms. Signed-off-by: Ilpo Järvinen Signed-off-by: David S. Miller commit 1e4c71bec7a8d5530ddb6a6478003b6ea07a6b71 Author: Ilpo Järvinen Date: Wed Feb 21 22:54:52 2007 -0800 [TCP] FRTO: Incorrectly clears TCPCB_EVER_RETRANS bit FRTO was slightly too brave... Should only clear TCPCB_SACKED_RETRANS bit. Signed-off-by: Ilpo Järvinen Signed-off-by: David S. Miller commit 5d1be838d664ef5d3e54c9a91a5563e78821f96a Author: Paul Mackerras Date: Thu Apr 19 13:05:52 2007 -0700 [PPP]: Fix skbuff.c:BUG due incorrect logic in process_input_packet() This fixes: Subject: kernel BUG at net/core/skbuff.c in linux-2.6.21-rc6 process_input_packet() treats the case where the first byte is 0xff (PPP_ALLSTATIONS) but the second byte is 0x03 (PPP_UI) as indicating a packet with a PPP protocol number of 0xff. Arguably that's wrong since PPP protocol 0xff is reserved, and the RFC does envision the possibility of receiving frames where the control field has values other than 0x03. Signed-off-by: David S. Miller Signed-off-by: Andrew Morton --- CREDITS | 14 Documentation/feature-removal-schedule.txt | 21 Documentation/filesystems/proc.txt | 9 Documentation/networking/dccp.txt | 10 Documentation/networking/ip-sysctl.txt | 31 MAINTAINERS | 38 arch/ia64/hp/sim/simeth.c | 3 arch/ia64/sn/kernel/xpnet.c | 18 arch/ppc/8260_io/enet.c | 1 arch/ppc/8260_io/fcc_enet.c | 1 arch/ppc/8xx_io/enet.c | 1 arch/ppc/8xx_io/fec.c | 1 arch/s390/appldata/appldata_net_sum.c | 4 arch/s390/lib/Makefile | 2 arch/s390/lib/div64.c | 2 arch/um/drivers/daemon_kern.c | 2 arch/um/drivers/mcast_kern.c | 2 arch/um/drivers/net_kern.c | 2 arch/um/drivers/pcap_kern.c | 2 arch/um/drivers/slip_kern.c | 2 arch/um/drivers/slirp_kern.c | 2 arch/um/os-Linux/drivers/ethertap_kern.c | 2 arch/um/os-Linux/drivers/tuntap_kern.c | 2 arch/xtensa/platform-iss/network.c | 2 drivers/atm/ambassador.c | 2 drivers/atm/atmtcp.c | 4 drivers/atm/eni.c | 4 drivers/atm/eni.h | 2 drivers/atm/he.c | 4 drivers/atm/idt77252.c | 28 drivers/atm/nicstar.c | 14 drivers/block/aoe/aoe.h | 9 drivers/block/aoe/aoecmd.c | 17 drivers/block/aoe/aoenet.c | 2 drivers/bluetooth/bfusb.c | 2 drivers/bluetooth/bluecard_cs.c | 6 drivers/bluetooth/bpa10x.c | 4 drivers/bluetooth/bt3c_cs.c | 6 drivers/bluetooth/btuart_cs.c | 6 drivers/bluetooth/dtl1_cs.c | 2 drivers/bluetooth/hci_h4.c | 6 drivers/char/pcmcia/synclink_cs.c | 2 drivers/char/random.c | 38 drivers/connector/connector.c | 4 drivers/ieee1394/eth1394.c | 2 drivers/ieee1394/eth1394.h | 2 drivers/infiniband/hw/amso1100/c2.c | 6 drivers/infiniband/hw/cxgb3/iwch_cm.c | 17 drivers/infiniband/ulp/ipoib/ipoib_cm.c | 2 drivers/infiniband/ulp/ipoib/ipoib_ib.c | 2 drivers/isdn/act2000/module.c | 2 drivers/isdn/gigaset/usb-gigaset.c | 2 drivers/isdn/hardware/avm/b1dma.c | 3 drivers/isdn/hardware/avm/c4.c | 3 drivers/isdn/hisax/elsa_ser.c | 6 drivers/isdn/hisax/isdnl2.c | 3 drivers/isdn/hysdn/hycapi.c | 5 drivers/isdn/hysdn/hysdn_net.c | 2 drivers/isdn/hysdn/hysdn_sched.c | 5 
drivers/isdn/i4l/isdn_common.c | 2 drivers/isdn/i4l/isdn_net.c | 11 drivers/isdn/i4l/isdn_ppp.c | 9 drivers/isdn/isdnloop/isdnloop.c | 3 drivers/isdn/pcbit/capi.c | 12 drivers/media/dvb/dvb-core/dvb_net.c | 16 drivers/message/fusion/mptlan.c | 36 drivers/net/3c501.c | 1 drivers/net/3c505.c | 3 drivers/net/3c507.c | 1 drivers/net/3c509.c | 1 drivers/net/3c515.c | 2 drivers/net/3c523.c | 3 drivers/net/3c527.c | 1 drivers/net/3c59x.c | 2 drivers/net/7990.c | 3 drivers/net/8139cp.c | 6 drivers/net/8139too.c | 7 drivers/net/82596.c | 1 drivers/net/Makefile | 2 drivers/net/a2065.c | 3 drivers/net/acenic.c | 1 drivers/net/amd8111e.c | 4 drivers/net/appletalk/cops.c | 4 drivers/net/appletalk/ltpc.c | 15 drivers/net/arcnet/arc-rawmode.c | 2 drivers/net/arcnet/arcnet.c | 17 drivers/net/arcnet/capmode.c | 14 drivers/net/arcnet/rfc1051.c | 2 drivers/net/arcnet/rfc1201.c | 2 drivers/net/ariadne.c | 1 drivers/net/arm/am79c961a.c | 1 drivers/net/arm/at91_ether.c | 1 drivers/net/arm/ep93xx_eth.c | 1 drivers/net/arm/ether1.c | 1 drivers/net/arm/ether3.c | 1 drivers/net/at1700.c | 1 drivers/net/atari_bionet.c | 7 drivers/net/atari_pamsnet.c | 6 drivers/net/atarilance.c | 1 drivers/net/atl1/atl1_main.c | 33 drivers/net/atp.c | 1 drivers/net/au1000_eth.c | 3 drivers/net/b44.c | 8 drivers/net/bmac.c | 1 drivers/net/bnx2.c | 35 drivers/net/bonding/bond_3ad.c | 8 drivers/net/bonding/bond_alb.c | 36 drivers/net/bonding/bond_main.c | 9 drivers/net/cassini.c | 11 drivers/net/chelsio/sge.c | 39 drivers/net/cris/eth_v10.c | 4 drivers/net/cs89x0.c | 2 drivers/net/cxgb3/cxgb3_offload.c | 2 drivers/net/cxgb3/sge.c | 39 drivers/net/de600.c | 1 drivers/net/de620.c | 1 drivers/net/declance.c | 1 drivers/net/defxx.c | 6 drivers/net/depca.c | 1 drivers/net/dgrs.c | 3 drivers/net/dl2k.c | 4 drivers/net/dm9000.c | 1 drivers/net/e100.c | 2 drivers/net/e1000/e1000_main.c | 17 drivers/net/eepro.c | 1 drivers/net/eepro100.c | 6 drivers/net/eexpress.c | 1 drivers/net/ehea/ehea_main.c | 35 drivers/net/epic100.c | 3 drivers/net/eth16i.c | 1 drivers/net/ewrk3.c | 1 drivers/net/fealnx.c | 1 drivers/net/fec.c | 1 drivers/net/fec_8xx/fec_main.c | 5 drivers/net/forcedeth.c | 30 drivers/net/fs_enet/fs_enet-main.c | 9 drivers/net/gianfar.c | 12 drivers/net/hamachi.c | 1 drivers/net/hamradio/bpqether.c | 2 drivers/net/hamradio/dmascc.c | 2 drivers/net/hamradio/hdlcdrv.c | 4 drivers/net/hamradio/yam.c | 4 drivers/net/hp100.c | 1 drivers/net/ibm_emac/ibm_emac_core.c | 3 drivers/net/ibmlana.c | 1 drivers/net/ibmveth.c | 1 drivers/net/ioc3-eth.c | 13 drivers/net/irda/ali-ircc.c | 9 drivers/net/irda/au1k_ir.c | 6 drivers/net/irda/donauboe.c | 8 drivers/net/irda/irda-usb.c | 6 drivers/net/irda/mcs7780.c | 38 drivers/net/irda/nsc-ircc.c | 15 drivers/net/irda/pxaficp_ir.c | 6 drivers/net/irda/sa1100_ir.c | 2 drivers/net/irda/smsc-ircc2.c | 5 drivers/net/irda/stir4200.c | 5 drivers/net/irda/via-ircc.c | 18 drivers/net/irda/vlsi_ir.c | 4 drivers/net/irda/w83977af_ir.c | 12 drivers/net/iseries_veth.c | 1 drivers/net/ixgb/ixgb_main.c | 36 drivers/net/ixp2000/ixpdev.c | 3 drivers/net/lance.c | 3 drivers/net/lasi_82596.c | 1 drivers/net/lib8390.c | 1 drivers/net/loopback.c | 24 drivers/net/lp486e.c | 1 drivers/net/mac89x0.c | 1 drivers/net/macb.c | 11 drivers/net/mace.c | 1 drivers/net/macmace.c | 4 drivers/net/meth.c | 11 drivers/net/mipsnet.c | 1 drivers/net/mv643xx_eth.c | 9 drivers/net/myri10ge/myri10ge.c | 7 drivers/net/myri_sbus.c | 4 drivers/net/natsemi.c | 1 drivers/net/netx-eth.c | 1 drivers/net/netxen/netxen_nic_hw.c | 15 
drivers/net/netxen/netxen_nic_init.c | 1 drivers/net/netxen/netxen_nic_main.c | 12 drivers/net/ni5010.c | 1 drivers/net/ni52.c | 3 drivers/net/ni65.c | 7 drivers/net/ns83820.c | 5 drivers/net/pasemi_mac.c | 14 drivers/net/pci-skeleton.c | 3 drivers/net/pcmcia/3c574_cs.c | 1 drivers/net/pcmcia/3c589_cs.c | 1 drivers/net/pcmcia/axnet_cs.c | 3 drivers/net/pcmcia/fmvj18x_cs.c | 1 drivers/net/pcmcia/nmclan_cs.c | 4 drivers/net/pcmcia/smc91c92_cs.c | 1 drivers/net/pcmcia/xirc2ps_cs.c | 1 drivers/net/pcnet32.c | 1 drivers/net/plip.c | 2 drivers/net/ppp_generic.c | 2 drivers/net/ppp_synctty.c | 3 drivers/net/pppoe.c | 164 drivers/net/pppox.c | 2 drivers/net/qla3xxx.c | 5 drivers/net/r8169.c | 3 drivers/net/rionet.c | 1 drivers/net/rrunner.c | 3 drivers/net/s2io.c | 4 drivers/net/saa9730.c | 1 drivers/net/sb1000.c | 2 drivers/net/sb1250-mac.c | 3 drivers/net/sc92031.c | 1 drivers/net/seeq8005.c | 1 drivers/net/sgiseeq.c | 3 drivers/net/sis190.c | 1 drivers/net/sis900.c | 3 drivers/net/sk98lin/skge.c | 12 drivers/net/skfp/skfddi.c | 3 drivers/net/skge.c | 6 drivers/net/sky2.c | 11 drivers/net/slip.c | 2 drivers/net/smc911x.c | 2 drivers/net/smc9194.c | 1 drivers/net/smc91x.c | 1 drivers/net/sonic.c | 2 drivers/net/spider_net.c | 3 drivers/net/starfire.c | 1 drivers/net/sun3_82586.c | 3 drivers/net/sun3lance.c | 5 drivers/net/sunbmac.c | 1 drivers/net/sundance.c | 1 drivers/net/sungem.c | 9 drivers/net/sunhme.c | 9 drivers/net/sunlance.c | 4 drivers/net/sunqe.c | 3 drivers/net/tc35815.c | 1 drivers/net/tg3.c | 53 drivers/net/tlan.c | 4 drivers/net/tokenring/3c359.c | 11 drivers/net/tokenring/ibmtr.c | 1 drivers/net/tokenring/lanstreamer.c | 7 drivers/net/tokenring/olympic.c | 18 drivers/net/tokenring/smctr.c | 6 drivers/net/tokenring/tms380tr.c | 6 drivers/net/tsi108_eth.c | 1 drivers/net/tulip/de2104x.c | 5 drivers/net/tulip/de4x5.c | 2 drivers/net/tulip/dmfe.c | 12 drivers/net/tulip/interrupt.c | 2 drivers/net/tulip/uli526x.c | 22 drivers/net/tulip/winbond-840.c | 3 drivers/net/tulip/xircom_cb.c | 7 drivers/net/tulip/xircom_tulip_cb.c | 5 drivers/net/tun.c | 8 drivers/net/typhoon.c | 1 drivers/net/via-rhine.c | 1 drivers/net/via-velocity.c | 12 drivers/net/wan/cosa.c | 2 drivers/net/wan/cycx_x25.c | 2 drivers/net/wan/dlci.c | 2 drivers/net/wan/dscc4.c | 3 drivers/net/wan/farsync.c | 2 drivers/net/wan/hdlc_cisco.c | 2 drivers/net/wan/hdlc_fr.c | 5 drivers/net/wan/hostess_sv11.c | 2 drivers/net/wan/lmc/lmc_main.c | 16 drivers/net/wan/pc300_drv.c | 6 drivers/net/wan/pc300_tty.c | 6 drivers/net/wan/sbni.c | 5 drivers/net/wan/sealevel.c | 2 drivers/net/wan/syncppp.c | 2 drivers/net/wan/z85230.c | 4 drivers/net/wireless/Kconfig | 120 drivers/net/wireless/airo.c | 11 drivers/net/wireless/arlan-main.c | 1 drivers/net/wireless/atmel.c | 6 drivers/net/wireless/bcm43xx/Kconfig | 3 drivers/net/wireless/bcm43xx/bcm43xx_dma.c | 3 drivers/net/wireless/hostap/Kconfig | 3 drivers/net/wireless/hostap/hostap_80211_rx.c | 23 drivers/net/wireless/hostap/hostap_80211_tx.c | 25 drivers/net/wireless/hostap/hostap_ap.c | 7 drivers/net/wireless/hostap/hostap_hw.c | 7 drivers/net/wireless/hostap/hostap_main.c | 17 drivers/net/wireless/ipw2100.c | 5 drivers/net/wireless/ipw2200.c | 4 drivers/net/wireless/netwave_cs.c | 1 drivers/net/wireless/orinoco.c | 5 drivers/net/wireless/prism54/islpci_eth.c | 23 drivers/net/wireless/ray_cs.c | 4 drivers/net/wireless/strip.c | 2 drivers/net/wireless/wavelan.c | 9 drivers/net/wireless/wavelan_cs.c | 6 drivers/net/wireless/zd1201.c | 6 drivers/net/wireless/zd1211rw/Kconfig | 3 
drivers/net/yellowfin.c | 1 drivers/net/znet.c | 1 drivers/parisc/led.c | 4 drivers/s390/net/claw.c | 2 drivers/s390/net/ctcmain.c | 28 drivers/s390/net/lcs.c | 3 drivers/s390/net/netiucv.c | 21 drivers/s390/net/qeth_eddp.c | 30 drivers/s390/net/qeth_main.c | 45 drivers/s390/net/qeth_tso.h | 14 drivers/scsi/scsi_netlink.c | 5 drivers/scsi/scsi_transport_iscsi.c | 4 drivers/usb/atm/usbatm.c | 10 drivers/usb/gadget/ether.c | 1 drivers/usb/net/asix.c | 8 drivers/usb/net/catc.c | 3 drivers/usb/net/gl620a.c | 2 drivers/usb/net/kaweth.c | 2 drivers/usb/net/net1080.c | 2 drivers/usb/net/pegasus.c | 7 drivers/usb/net/rndis_host.c | 2 drivers/usb/net/rtl8150.c | 1 drivers/usb/net/usbnet.c | 1 fs/compat_ioctl.c | 18 fs/ecryptfs/netlink.c | 6 include/asm-alpha/socket.h | 2 include/asm-alpha/sockios.h | 3 include/asm-arm/div64.h | 3 include/asm-arm/socket.h | 2 include/asm-arm/sockios.h | 3 include/asm-arm26/socket.h | 2 include/asm-arm26/sockios.h | 3 include/asm-avr32/socket.h | 2 include/asm-avr32/sockios.h | 3 include/asm-cris/socket.h | 2 include/asm-cris/sockios.h | 3 include/asm-frv/socket.h | 2 include/asm-frv/sockios.h | 3 include/asm-generic/div64.h | 7 include/asm-h8300/socket.h | 2 include/asm-h8300/sockios.h | 3 include/asm-i386/div64.h | 4 include/asm-i386/socket.h | 2 include/asm-i386/sockios.h | 3 include/asm-ia64/socket.h | 2 include/asm-ia64/sockios.h | 3 include/asm-m32r/socket.h | 2 include/asm-m32r/sockios.h | 3 include/asm-m68k/div64.h | 3 include/asm-m68k/socket.h | 2 include/asm-m68k/sockios.h | 3 include/asm-mips/div64.h | 11 include/asm-mips/socket.h | 2 include/asm-mips/sockios.h | 3 include/asm-parisc/socket.h | 2 include/asm-parisc/sockios.h | 3 include/asm-powerpc/socket.h | 2 include/asm-powerpc/sockios.h | 3 include/asm-s390/socket.h | 2 include/asm-s390/sockios.h | 3 include/asm-sh/socket.h | 2 include/asm-sh/sockios.h | 3 include/asm-sh64/sockios.h | 3 include/asm-sparc/socket.h | 2 include/asm-sparc/sockios.h | 3 include/asm-sparc64/socket.h | 2 include/asm-sparc64/sockios.h | 3 include/asm-um/div64.h | 1 include/asm-v850/socket.h | 2 include/asm-v850/sockios.h | 3 include/asm-x86_64/socket.h | 2 include/asm-x86_64/sockios.h | 3 include/asm-xtensa/div64.h | 6 include/asm-xtensa/socket.h | 2 include/asm-xtensa/sockios.h | 3 include/linux/Kbuild | 7 include/linux/atalk.h | 4 include/linux/dccp.h | 46 include/linux/fib_rules.h | 15 include/linux/hdlc.h | 4 include/linux/icmp.h | 9 include/linux/icmpv6.h | 9 include/linux/if_addr.h | 1 include/linux/if_arp.h | 9 include/linux/if_bridge.h | 3 include/linux/if_ether.h | 2 include/linux/if_link.h | 1 include/linux/if_packet.h | 1 include/linux/if_pppox.h | 10 include/linux/if_tr.h | 2 include/linux/if_vlan.h | 6 include/linux/igmp.h | 21 include/linux/in.h | 1 include/linux/in6.h | 1 include/linux/ip.h | 14 include/linux/ipv6.h | 14 include/linux/jhash.h | 2 include/linux/netdevice.h | 6 include/linux/netfilter.h | 12 include/linux/netfilter/nf_conntrack_tcp.h | 5 include/linux/netfilter/nfnetlink.h | 19 include/linux/netfilter/nfnetlink_conntrack.h | 4 include/linux/netfilter_bridge.h | 11 include/linux/netfilter_bridge/ebt_802_3.h | 2 include/linux/netfilter_bridge/ebt_arp.h | 4 include/linux/netfilter_ipv4/Kbuild | 14 include/linux/netfilter_ipv4/ip_conntrack.h | 402 -- include/linux/netfilter_ipv4/ip_conntrack_amanda.h | 11 include/linux/netfilter_ipv4/ip_conntrack_core.h | 61 include/linux/netfilter_ipv4/ip_conntrack_ftp.h | 44 include/linux/netfilter_ipv4/ip_conntrack_h323.h | 89 
include/linux/netfilter_ipv4/ip_conntrack_helper.h | 46 include/linux/netfilter_ipv4/ip_conntrack_icmp.h | 6 include/linux/netfilter_ipv4/ip_conntrack_irc.h | 32 include/linux/netfilter_ipv4/ip_conntrack_pptp.h | 326 - include/linux/netfilter_ipv4/ip_conntrack_proto_gre.h | 114 include/linux/netfilter_ipv4/ip_conntrack_protocol.h | 98 include/linux/netfilter_ipv4/ip_conntrack_sctp.h | 6 include/linux/netfilter_ipv4/ip_conntrack_sip.h | 40 include/linux/netfilter_ipv4/ip_conntrack_tcp.h | 6 include/linux/netfilter_ipv4/ip_conntrack_tftp.h | 20 include/linux/netfilter_ipv4/ip_conntrack_tuple.h | 146 include/linux/netfilter_ipv4/ip_nat.h | 79 include/linux/netfilter_ipv4/ip_nat_core.h | 18 include/linux/netfilter_ipv4/ip_nat_helper.h | 33 include/linux/netfilter_ipv4/ip_nat_pptp.h | 11 include/linux/netfilter_ipv4/ip_nat_protocol.h | 74 include/linux/netfilter_ipv4/ip_nat_rule.h | 28 include/linux/netfilter_ipv4/ipt_SAME.h | 2 include/linux/netlink.h | 33 include/linux/nl80211.h | 38 include/linux/rtnetlink.h | 13 include/linux/sctp.h | 9 include/linux/skbuff.h | 387 +- include/linux/sysctl.h | 5 include/linux/tcp.h | 21 include/linux/udp.h | 9 include/net/addrconf.h | 4 include/net/ax25.h | 2 include/net/bluetooth/hci.h | 18 include/net/cfg80211.h | 36 include/net/cipso_ipv4.h | 2 include/net/compat.h | 1 include/net/dn_fib.h | 9 include/net/dn_route.h | 1 include/net/esp.h | 2 include/net/fib_rules.h | 20 include/net/inet6_hashtables.h | 12 include/net/inet_ecn.h | 8 include/net/inet_sock.h | 11 include/net/ip.h | 10 include/net/ip6_fib.h | 2 include/net/ip6_route.h | 5 include/net/ip_fib.h | 6 include/net/ipv6.h | 1 include/net/ipx.h | 2 include/net/iw_handler.h | 10 include/net/llc_pdu.h | 15 include/net/neighbour.h | 10 include/net/netfilter/nf_conntrack.h | 5 include/net/netfilter/nf_conntrack_compat.h | 145 include/net/netfilter/nf_conntrack_core.h | 3 include/net/netfilter/nf_conntrack_ecache.h | 30 include/net/netfilter/nf_conntrack_l3proto.h | 5 include/net/netfilter/nf_nat_rule.h | 10 include/net/netlink.h | 16 include/net/pkt_cls.h | 10 include/net/pkt_sched.h | 182 include/net/red.h | 10 include/net/rtnetlink.h | 26 include/net/sch_generic.h | 12 include/net/sctp/constants.h | 2 include/net/sctp/structs.h | 5 include/net/sctp/ulpevent.h | 1 include/net/sctp/ulpqueue.h | 2 include/net/sctp/user.h | 25 include/net/sock.h | 89 include/net/tcp.h | 181 include/net/tcp_ecn.h | 17 include/net/udp.h | 11 include/net/udplite.h | 45 include/net/wireless.h | 139 include/net/x25device.h | 2 include/net/xfrm.h | 2 kernel/audit.c | 16 kernel/hrtimer.c | 1 kernel/taskstats.c | 4 kernel/time.c | 2 lib/Makefile | 5 lib/div64.c | 22 lib/kobject_uevent.c | 2 net/802/fddi.c | 7 net/802/hippi.c | 12 net/802/psnap.c | 4 net/802/tr.c | 9 net/8021q/vlan.c | 6 net/8021q/vlan_dev.c | 14 net/Kconfig | 18 net/Makefile | 2 net/appletalk/aarp.c | 14 net/appletalk/ddp.c | 21 net/atm/br2684.c | 8 net/atm/clip.c | 2 net/atm/ioctl.c | 3 net/atm/lec.c | 13 net/atm/mpc.c | 15 net/ax25/af_ax25.c | 113 net/ax25/ax25_ds_subr.c | 2 net/ax25/ax25_in.c | 24 net/ax25/ax25_ip.c | 4 net/ax25/ax25_out.c | 12 net/ax25/ax25_subr.c | 4 net/bluetooth/af_bluetooth.c | 2 net/bluetooth/bnep/core.c | 16 net/bluetooth/cmtp/core.c | 4 net/bluetooth/hci_conn.c | 36 net/bluetooth/hci_core.c | 35 net/bluetooth/hci_event.c | 8 net/bluetooth/hci_sock.c | 2 net/bluetooth/l2cap.c | 76 net/bluetooth/rfcomm/core.c | 2 net/bluetooth/sco.c | 2 net/bridge/br.c | 12 net/bridge/br_device.c | 22 net/bridge/br_fdb.c | 42 net/bridge/br_forward.c | 2 
net/bridge/br_if.c | 6 net/bridge/br_input.c | 41 net/bridge/br_ioctl.c | 5 net/bridge/br_netfilter.c | 142 net/bridge/br_netlink.c | 24 net/bridge/br_notify.c | 13 net/bridge/br_private.h | 23 net/bridge/br_stp.c | 10 net/bridge/br_stp_bpdu.c | 19 net/bridge/br_stp_if.c | 59 net/bridge/br_sysfs_br.c | 18 net/bridge/br_sysfs_if.c | 8 net/bridge/netfilter/ebt_arp.c | 48 net/bridge/netfilter/ebt_log.c | 12 net/bridge/netfilter/ebt_ulog.c | 12 net/compat.c | 79 net/core/datagram.c | 10 net/core/dev.c | 296 + net/core/dev_mcast.c | 2 net/core/ethtool.c | 4 net/core/fib_rules.c | 161 net/core/filter.c | 6 net/core/gen_stats.c | 4 net/core/link_watch.c | 2 net/core/neighbour.c | 34 net/core/net-sysfs.c | 4 net/core/netpoll.c | 23 net/core/pktgen.c | 299 - net/core/rtnetlink.c | 304 + net/core/skbuff.c | 373 +- net/core/sock.c | 769 ++-- net/core/sysctl_net_core.c | 8 net/core/utils.c | 6 net/core/wireless.c | 954 ----- net/dccp/ackvec.c | 2 net/dccp/ccids/ccid3.c | 320 - net/dccp/ccids/ccid3.h | 10 net/dccp/ccids/lib/loss_interval.c | 2 net/dccp/dccp.h | 75 net/dccp/input.c | 54 net/dccp/ipv4.c | 43 net/dccp/ipv6.c | 40 net/dccp/options.c | 18 net/dccp/output.c | 3 net/dccp/probe.c | 17 net/decnet/af_decnet.c | 1 net/decnet/dn_dev.c | 31 net/decnet/dn_fib.c | 8 net/decnet/dn_neigh.c | 6 net/decnet/dn_nsp_in.c | 7 net/decnet/dn_nsp_out.c | 8 net/decnet/dn_route.c | 28 net/decnet/dn_rules.c | 6 net/decnet/dn_table.c | 11 net/decnet/netfilter/dn_rtmsg.c | 8 net/econet/af_econet.c | 15 net/ethernet/eth.c | 5 net/ieee80211/Kconfig | 3 net/ieee80211/ieee80211_crypt_wep.c | 2 net/ieee80211/ieee80211_rx.c | 21 net/ieee80211/ieee80211_tx.c | 12 net/ipv4/Kconfig | 27 net/ipv4/Makefile | 2 net/ipv4/af_inet.c | 106 net/ipv4/ah4.c | 14 net/ipv4/arp.c | 16 net/ipv4/cipso_ipv4.c | 4 net/ipv4/devinet.c | 37 net/ipv4/esp4.c | 59 net/ipv4/fib_frontend.c | 17 net/ipv4/fib_hash.c | 2 net/ipv4/fib_rules.c | 11 net/ipv4/fib_semantics.c | 2 net/ipv4/fib_trie.c | 51 net/ipv4/icmp.c | 31 net/ipv4/igmp.c | 43 net/ipv4/inet_diag.c | 90 net/ipv4/inetpeer.c | 38 net/ipv4/ip_forward.c | 14 net/ipv4/ip_fragment.c | 47 net/ipv4/ip_gre.c | 40 net/ipv4/ip_input.c | 24 net/ipv4/ip_options.c | 26 net/ipv4/ip_output.c | 123 net/ipv4/ip_sockglue.c | 1175 +++--- net/ipv4/ipcomp.c | 58 net/ipv4/ipconfig.c | 19 net/ipv4/ipip.c | 38 net/ipv4/ipmr.c | 418 +- net/ipv4/ipvs/ip_vs_app.c | 14 net/ipv4/ipvs/ip_vs_core.c | 56 net/ipv4/ipvs/ip_vs_dh.c | 2 net/ipv4/ipvs/ip_vs_ftp.c | 8 net/ipv4/ipvs/ip_vs_lblc.c | 2 net/ipv4/ipvs/ip_vs_lblcr.c | 2 net/ipv4/ipvs/ip_vs_proto_ah.c | 16 net/ipv4/ipvs/ip_vs_proto_tcp.c | 24 net/ipv4/ipvs/ip_vs_proto_udp.c | 26 net/ipv4/ipvs/ip_vs_sh.c | 2 net/ipv4/ipvs/ip_vs_xmit.c | 44 net/ipv4/multipath_drr.c | 2 net/ipv4/netfilter.c | 8 net/ipv4/netfilter/Kconfig | 267 - net/ipv4/netfilter/Makefile | 45 net/ipv4/netfilter/arp_tables.c | 4 net/ipv4/netfilter/arpt_mangle.c | 12 net/ipv4/netfilter/ip_conntrack_amanda.c | 229 - net/ipv4/netfilter/ip_conntrack_core.c | 1550 -------- net/ipv4/netfilter/ip_conntrack_ftp.c | 520 -- net/ipv4/netfilter/ip_conntrack_helper_h323.c | 1841 ---------- net/ipv4/netfilter/ip_conntrack_helper_pptp.c | 684 --- net/ipv4/netfilter/ip_conntrack_irc.c | 314 - net/ipv4/netfilter/ip_conntrack_netbios_ns.c | 143 net/ipv4/netfilter/ip_conntrack_netlink.c | 1577 -------- net/ipv4/netfilter/ip_conntrack_proto_generic.c | 74 net/ipv4/netfilter/ip_conntrack_proto_gre.c | 328 - net/ipv4/netfilter/ip_conntrack_proto_icmp.c | 315 - net/ipv4/netfilter/ip_conntrack_proto_sctp.c | 659 --- 
net/ipv4/netfilter/ip_conntrack_proto_tcp.c | 1164 ------ net/ipv4/netfilter/ip_conntrack_proto_udp.c | 148 net/ipv4/netfilter/ip_conntrack_sip.c | 520 -- net/ipv4/netfilter/ip_conntrack_standalone.c | 962 ----- net/ipv4/netfilter/ip_conntrack_tftp.c | 161 net/ipv4/netfilter/ip_nat_amanda.c | 85 net/ipv4/netfilter/ip_nat_core.c | 634 --- net/ipv4/netfilter/ip_nat_ftp.c | 180 net/ipv4/netfilter/ip_nat_helper.c | 436 -- net/ipv4/netfilter/ip_nat_helper_h323.c | 611 --- net/ipv4/netfilter/ip_nat_helper_pptp.c | 350 - net/ipv4/netfilter/ip_nat_irc.c | 122 net/ipv4/netfilter/ip_nat_proto_gre.c | 174 net/ipv4/netfilter/ip_nat_proto_icmp.c | 87 net/ipv4/netfilter/ip_nat_proto_tcp.c | 154 net/ipv4/netfilter/ip_nat_proto_udp.c | 144 net/ipv4/netfilter/ip_nat_proto_unknown.c | 55 net/ipv4/netfilter/ip_nat_rule.c | 314 - net/ipv4/netfilter/ip_nat_sip.c | 282 - net/ipv4/netfilter/ip_nat_snmp_basic.c | 1333 ------- net/ipv4/netfilter/ip_nat_standalone.c | 388 -- net/ipv4/netfilter/ip_nat_tftp.c | 70 net/ipv4/netfilter/ip_queue.c | 28 net/ipv4/netfilter/ip_tables.c | 12 net/ipv4/netfilter/ipt_CLUSTERIP.c | 24 net/ipv4/netfilter/ipt_ECN.c | 15 net/ipv4/netfilter/ipt_LOG.c | 16 net/ipv4/netfilter/ipt_MASQUERADE.c | 57 net/ipv4/netfilter/ipt_NETMAP.c | 26 net/ipv4/netfilter/ipt_REDIRECT.c | 24 net/ipv4/netfilter/ipt_REJECT.c | 45 net/ipv4/netfilter/ipt_SAME.c | 40 net/ipv4/netfilter/ipt_TOS.c | 4 net/ipv4/netfilter/ipt_TTL.c | 2 net/ipv4/netfilter/ipt_ULOG.c | 77 net/ipv4/netfilter/ipt_addrtype.c | 2 net/ipv4/netfilter/ipt_ecn.c | 10 net/ipv4/netfilter/ipt_iprange.c | 2 net/ipv4/netfilter/ipt_recent.c | 6 net/ipv4/netfilter/ipt_tos.c | 2 net/ipv4/netfilter/ipt_ttl.c | 11 net/ipv4/netfilter/iptable_filter.c | 3 net/ipv4/netfilter/iptable_mangle.c | 30 net/ipv4/netfilter/nf_conntrack_l3proto_ipv4.c | 27 net/ipv4/netfilter/nf_conntrack_proto_icmp.c | 11 net/ipv4/netfilter/nf_nat_core.c | 14 net/ipv4/netfilter/nf_nat_h323.c | 14 net/ipv4/netfilter/nf_nat_helper.c | 76 net/ipv4/netfilter/nf_nat_pptp.c | 2 net/ipv4/netfilter/nf_nat_rule.c | 2 net/ipv4/netfilter/nf_nat_sip.c | 11 net/ipv4/netfilter/nf_nat_snmp_basic.c | 8 net/ipv4/netfilter/nf_nat_standalone.c | 18 net/ipv4/proc.c | 25 net/ipv4/protocol.c | 2 net/ipv4/raw.c | 18 net/ipv4/route.c | 29 net/ipv4/syncookies.c | 40 net/ipv4/sysctl_net_ipv4.c | 16 net/ipv4/tcp.c | 131 net/ipv4/tcp_cong.c | 31 net/ipv4/tcp_cubic.c | 79 net/ipv4/tcp_hybla.c | 2 net/ipv4/tcp_illinois.c | 284 + net/ipv4/tcp_input.c | 617 ++- net/ipv4/tcp_ipv4.c | 143 net/ipv4/tcp_minisocks.c | 29 net/ipv4/tcp_output.c | 196 - net/ipv4/tcp_probe.c | 68 net/ipv4/tcp_timer.c | 10 net/ipv4/tcp_vegas.c | 16 net/ipv4/tcp_westwood.c | 17 net/ipv4/tcp_yeah.c | 271 + net/ipv4/tcp_yeah.h | 135 net/ipv4/udp.c | 238 - net/ipv4/udplite.c | 2 net/ipv4/xfrm4_input.c | 23 net/ipv4/xfrm4_mode_beet.c | 37 net/ipv4/xfrm4_mode_transport.c | 28 net/ipv4/xfrm4_mode_tunnel.c | 31 net/ipv4/xfrm4_output.c | 3 net/ipv4/xfrm4_policy.c | 8 net/ipv4/xfrm4_tunnel.c | 3 net/ipv6/Kconfig | 10 net/ipv6/Makefile | 2 net/ipv6/addrconf.c | 178 net/ipv6/af_inet6.c | 58 net/ipv6/ah6.c | 34 net/ipv6/datagram.c | 63 net/ipv6/esp6.c | 52 net/ipv6/exthdrs.c | 117 net/ipv6/fib6_rules.c | 39 net/ipv6/icmp.c | 48 net/ipv6/ip6_fib.c | 4 net/ipv6/ip6_input.c | 18 net/ipv6/ip6_output.c | 187 - net/ipv6/ip6_tunnel.c | 643 ++- net/ipv6/ipcomp6.c | 16 net/ipv6/ipv6_sockglue.c | 46 net/ipv6/ipv6_syms.c | 36 net/ipv6/mcast.c | 46 net/ipv6/mip6.c | 60 net/ipv6/ndisc.c | 182 net/ipv6/netfilter.c | 8 net/ipv6/netfilter/ip6_queue.c | 28 
net/ipv6/netfilter/ip6_tables.c | 17 net/ipv6/netfilter/ip6t_HL.c | 2 net/ipv6/netfilter/ip6t_LOG.c | 21 net/ipv6/netfilter/ip6t_REJECT.c | 11 net/ipv6/netfilter/ip6t_eui64.c | 8 net/ipv6/netfilter/ip6t_hl.c | 2 net/ipv6/netfilter/ip6t_ipv6header.c | 2 net/ipv6/netfilter/ip6table_filter.c | 2 net/ipv6/netfilter/ip6table_mangle.c | 18 net/ipv6/netfilter/nf_conntrack_l3proto_ipv6.c | 30 net/ipv6/netfilter/nf_conntrack_proto_icmpv6.c | 7 net/ipv6/netfilter/nf_conntrack_reasm.c | 59 net/ipv6/proc.c | 64 net/ipv6/protocol.c | 4 net/ipv6/raw.c | 52 net/ipv6/reassembly.c | 62 net/ipv6/route.c | 24 net/ipv6/sit.c | 35 net/ipv6/tcp_ipv6.c | 120 net/ipv6/udp.c | 123 net/ipv6/udplite.c | 2 net/ipv6/xfrm6_input.c | 14 net/ipv6/xfrm6_mode_beet.c | 27 net/ipv6/xfrm6_mode_ro.c | 7 net/ipv6/xfrm6_mode_transport.c | 20 net/ipv6/xfrm6_mode_tunnel.c | 36 net/ipv6/xfrm6_output.c | 6 net/ipv6/xfrm6_policy.c | 25 net/ipv6/xfrm6_tunnel.c | 2 net/ipx/af_ipx.c | 8 net/ipx/ipx_route.c | 4 net/irda/af_irda.c | 136 net/irda/ircomm/ircomm_param.c | 4 net/irda/irlan/irlan_common.c | 2 net/irda/irlan/irlan_eth.c | 3 net/irda/irlap_event.c | 2 net/irda/irlap_frame.c | 18 net/irda/irqueue.c | 9 net/irda/irttp.c | 10 net/irda/parameters.c | 8 net/irda/qos.c | 14 net/irda/wrapper.c | 5 net/iucv/af_iucv.c | 6 net/key/af_key.c | 4 net/llc/llc_input.c | 2 net/llc/llc_output.c | 8 net/llc/llc_sap.c | 5 net/netfilter/Kconfig | 63 net/netfilter/core.c | 21 net/netfilter/nf_conntrack_core.c | 58 net/netfilter/nf_conntrack_ecache.c | 23 net/netfilter/nf_conntrack_expect.c | 4 net/netfilter/nf_conntrack_ftp.c | 6 net/netfilter/nf_conntrack_netbios_ns.c | 2 net/netfilter/nf_conntrack_netlink.c | 66 net/netfilter/nf_conntrack_proto.c | 142 net/netfilter/nf_conntrack_proto_generic.c | 5 net/netfilter/nf_conntrack_proto_sctp.c | 9 net/netfilter/nf_conntrack_proto_tcp.c | 88 net/netfilter/nf_conntrack_proto_udp.c | 5 net/netfilter/nf_conntrack_standalone.c | 11 net/netfilter/nfnetlink.c | 197 - net/netfilter/nfnetlink_log.c | 108 net/netfilter/nfnetlink_queue.c | 20 net/netfilter/x_tables.c | 26 net/netfilter/xt_CONNMARK.c | 32 net/netfilter/xt_CONNSECMARK.c | 18 net/netfilter/xt_DSCP.c | 10 net/netfilter/xt_NOTRACK.c | 4 net/netfilter/xt_TCPMSS.c | 12 net/netfilter/xt_connbytes.c | 35 net/netfilter/xt_connmark.c | 17 net/netfilter/xt_conntrack.c | 110 net/netfilter/xt_dscp.c | 6 net/netfilter/xt_hashlimit.c | 14 net/netfilter/xt_helper.c | 60 net/netfilter/xt_length.c | 5 net/netfilter/xt_limit.c | 7 net/netfilter/xt_mac.c | 4 net/netfilter/xt_pkttype.c | 2 net/netfilter/xt_realm.c | 2 net/netfilter/xt_state.c | 4 net/netlink/af_netlink.c | 90 net/netlink/attr.c | 5 net/netlink/genetlink.c | 66 net/netrom/af_netrom.c | 115 net/netrom/nr_dev.c | 4 net/netrom/nr_in.c | 6 net/netrom/nr_loopback.c | 4 net/netrom/nr_out.c | 8 net/netrom/nr_subr.c | 4 net/packet/af_packet.c | 94 net/rose/af_rose.c | 70 net/rose/rose_loopback.c | 2 net/rose/rose_route.c | 2 net/rxrpc/connection.c | 6 net/rxrpc/main.c | 2 net/rxrpc/transport.c | 8 net/sched/Kconfig | 56 net/sched/act_api.c | 81 net/sched/act_gact.c | 5 net/sched/act_ipt.c | 5 net/sched/act_mirred.c | 5 net/sched/act_pedit.c | 7 net/sched/act_police.c | 34 net/sched/act_simple.c | 5 net/sched/cls_api.c | 36 net/sched/cls_basic.c | 7 net/sched/cls_fw.c | 7 net/sched/cls_route.c | 11 net/sched/cls_rsvp.c | 1 net/sched/cls_rsvp.h | 12 net/sched/cls_rsvp6.c | 1 net/sched/cls_tcindex.c | 9 net/sched/cls_u32.c | 13 net/sched/em_u32.c | 2 net/sched/ematch.c | 17 net/sched/sch_api.c | 227 - 
net/sched/sch_atm.c | 28 net/sched/sch_cbq.c | 207 - net/sched/sch_dsmark.c | 22 net/sched/sch_generic.c | 35 net/sched/sch_hfsc.c | 109 net/sched/sch_htb.c | 130 net/sched/sch_ingress.c | 27 net/sched/sch_netem.c | 110 net/sched/sch_prio.c | 14 net/sched/sch_sfq.c | 9 net/sched/sch_tbf.c | 47 net/sched/sch_teql.c | 2 net/sctp/associola.c | 14 net/sctp/debug.c | 5 net/sctp/input.c | 51 net/sctp/inqueue.c | 8 net/sctp/ipv6.c | 36 net/sctp/output.c | 2 net/sctp/outqueue.c | 12 net/sctp/protocol.c | 20 net/sctp/sm_make_chunk.c | 12 net/sctp/sm_sideeffect.c | 16 net/sctp/sm_statefuns.c | 30 net/sctp/sm_statetable.c | 2 net/sctp/socket.c | 267 + net/sctp/transport.c | 2 net/sctp/ulpevent.c | 49 net/sctp/ulpqueue.c | 173 net/socket.c | 33 net/sunrpc/socklib.c | 2 net/sunrpc/svcsock.c | 10 net/tipc/config.c | 2 net/tipc/eth_media.c | 8 net/tipc/link.c | 48 net/tipc/msg.h | 7 net/tipc/netlink.c | 2 net/tipc/port.c | 8 net/tipc/socket.c | 2 net/unix/af_unix.c | 2 net/wanrouter/wanmain.c | 6 net/wireless/Kconfig | 16 net/wireless/Makefile | 3 net/wireless/core.c | 209 + net/wireless/core.h | 49 net/wireless/sysfs.c | 80 net/wireless/sysfs.h | 9 net/x25/af_x25.c | 22 net/x25/x25_dev.c | 4 net/x25/x25_in.c | 14 net/x25/x25_out.c | 6 net/xfrm/xfrm_algo.c | 169 net/xfrm/xfrm_input.c | 6 net/xfrm/xfrm_policy.c | 14 net/xfrm/xfrm_state.c | 50 net/xfrm/xfrm_user.c | 122 security/selinux/hooks.c | 6 security/selinux/netlink.c | 4 917 files changed, 12605 insertions(+), 29032 deletions(-) diff -puN CREDITS~git-net CREDITS --- a/CREDITS~git-net +++ a/CREDITS @@ -317,6 +317,12 @@ S: 2322 37th Ave SW S: Seattle, Washington 98126-2010 S: USA +N: Johannes Berg +E: johannes@sipsolutions.net +W: http://johannes.sipsolutions.net/ +P: 1024D/9AB78CA5 AD02 0176 4E29 C137 1DF6 08D2 FC44 CF86 9AB7 8CA5 +D: powerpc & 802.11 hacker + N: Stephen R. van den Berg (AKA BuGless) E: berg@pool.informatik.rwth-aachen.de D: General kernel, gcc, and libc hacker @@ -2286,14 +2292,14 @@ S: D-90453 Nuernberg S: Germany N: Arnaldo Carvalho de Melo -E: acme@mandriva.com E: acme@ghostprotocols.net +E: arnaldo.melo@gmail.com +E: acme@redhat.com W: http://oops.ghostprotocols.net:81/blog/ P: 1024D/9224DF01 D5DF E3BB E3C8 BCBB F8AD 841A B6AB 4681 9224 DF01 D: IPX, LLC, DCCP, cyc2x, wl3501_cs, net/ hacks -S: Mandriva -S: R. Tocantins, 89 - Cristo Rei -S: 80050-430 - Curitiba - Paraná +S: R. Brasílio Itiberê, 4270/1010 - Água Verde +S: 80240-060 - Curitiba - Paraná S: Brazil N: Karsten Merker diff -puN Documentation/feature-removal-schedule.txt~git-net Documentation/feature-removal-schedule.txt --- a/Documentation/feature-removal-schedule.txt~git-net +++ a/Documentation/feature-removal-schedule.txt @@ -206,15 +206,6 @@ Who: Adrian Bunk --------------------------- -What: IPv4 only connection tracking/NAT/helpers -When: 2.6.22 -Why: The new layer 3 independant connection tracking replaces the old - IPv4 only version. After some stabilization of the new code the - old one will be removed. 
-Who: Patrick McHardy - ---------------------------- - What: ACPI hooks (X86_SPEEDSTEP_CENTRINO_ACPI) in speedstep-centrino driver When: December 2006 Why: Speedstep-centrino driver with ACPI hooks and acpi-cpufreq driver are @@ -289,18 +280,6 @@ Who: Richard Purdie --------------------------- -What: Wireless extensions over netlink (CONFIG_NET_WIRELESS_RTNETLINK) -When: with the merge of wireless-dev, 2.6.22 or later -Why: The option/code is - * not enabled on most kernels - * not required by any userspace tools (except an experimental one, - and even there only for some parts, others use ioctl) - * pointless since wext is no longer evolving and the ioctl - interface needs to be kept -Who: Johannes Berg - ---------------------------- - What: i8xx_tco watchdog driver When: in 2.6.22 Why: the i8xx_tco watchdog driver has been replaced by the iTCO_wdt diff -puN Documentation/filesystems/proc.txt~git-net Documentation/filesystems/proc.txt --- a/Documentation/filesystems/proc.txt~git-net +++ a/Documentation/filesystems/proc.txt @@ -1421,6 +1421,15 @@ fewer messages that will be written. Mes be dropped. The default settings limit warning messages to one every five seconds. +warnings +-------- + +This controls console messages from the networking stack that can occur because +of problems on the network like duplicate address or bad checksums. Normally, +this should be enabled, but if the problem persists the messages can be +disabled. + + netdev_max_backlog ------------------ diff -puN Documentation/networking/dccp.txt~git-net Documentation/networking/dccp.txt --- a/Documentation/networking/dccp.txt~git-net +++ a/Documentation/networking/dccp.txt @@ -57,6 +57,16 @@ DCCP_SOCKOPT_SEND_CSCOV is for the recei coverage value are also acceptable. The higher the number, the more restrictive this setting (see [RFC 4340, sec. 9.2.1]). +The following two options apply to CCID 3 exclusively and are getsockopt()-only. +In either case, a TFRC info struct (defined in ) is returned. +DCCP_SOCKOPT_CCID_RX_INFO + Returns a `struct tfrc_rx_info' in optval; the buffer for optval and + optlen must be set to at least sizeof(struct tfrc_rx_info). +DCCP_SOCKOPT_CCID_TX_INFO + Returns a `struct tfrc_tx_info' in optval; the buffer for optval and + optlen must be set to at least sizeof(struct tfrc_tx_info). + + Sysctl variables ================ Several DCCP default parameters can be managed by the following sysctls diff -puN Documentation/networking/ip-sysctl.txt~git-net Documentation/networking/ip-sysctl.txt --- a/Documentation/networking/ip-sysctl.txt~git-net +++ a/Documentation/networking/ip-sysctl.txt @@ -179,11 +179,31 @@ tcp_fin_timeout - INTEGER because they eat maximum 1.5K of memory, but they tend to live longer. Cf. tcp_max_orphans. -tcp_frto - BOOLEAN +tcp_frto - INTEGER Enables F-RTO, an enhanced recovery algorithm for TCP retransmission timeouts. It is particularly beneficial in wireless environments where packet loss is typically due to random radio interference - rather than intermediate router congestion. + rather than intermediate router congestion. If set to 1, basic + version is enabled. 2 enables SACK enhanced F-RTO, which is + EXPERIMENTAL. The basic version can be used also when SACK is + enabled for a flow through tcp_sack sysctl. + +tcp_frto_response - INTEGER + When F-RTO has detected that a TCP retransmission timeout was + spurious (i.e, the timeout would have been avoided had TCP set a + longer retransmission timeout), TCP has several options what to do + next. 
Possible values are: + 0 Rate halving based; a smooth and conservative response, + results in halved cwnd and ssthresh after one RTT + 1 Very conservative response; not recommended because even + though being valid, it interacts poorly with the rest of + Linux TCP, halves cwnd and ssthresh immediately + 2 Aggressive response; undoes congestion control measures + that are now known to be unnecessary (ignoring the + possibility of a lost retransmission that would require + TCP to be more cautious), cwnd and ssthresh are restored + to the values prior timeout + Default: 0 (rate halving based) tcp_keepalive_time - INTEGER How often TCP sends out keepalive messages when keepalive is enabled. @@ -986,7 +1006,12 @@ bridge-nf-call-ip6tables - BOOLEAN Default: 1 bridge-nf-filter-vlan-tagged - BOOLEAN - 1 : pass bridged vlan-tagged ARP/IP traffic to arptables/iptables. + 1 : pass bridged vlan-tagged ARP/IP/IPv6 traffic to {arp,ip,ip6}tables. + 0 : disable this. + Default: 1 + +bridge-nf-filter-pppoe-tagged - BOOLEAN + 1 : pass bridged pppoe-tagged IP/IPv6 traffic to {ip,ip6}tables. 0 : disable this. Default: 1 diff -puN MAINTAINERS~git-net MAINTAINERS --- a/MAINTAINERS~git-net +++ a/MAINTAINERS @@ -390,7 +390,7 @@ S: Maintained APPLETALK NETWORK LAYER P: Arnaldo Carvalho de Melo -M: acme@conectiva.com.br +M: acme@ghostprotocols.net S: Maintained ARC FRAMEBUFFER DRIVER @@ -662,6 +662,7 @@ S: Supported ATMEL WIRELESS DRIVER P: Simon Kelley M: simon@thekelleys.org.uk +L: linux-wireless@vger.kernel.org W: http://www.thekelleys.org.uk/atmel W: http://atmelwlandriver.sourceforge.net/ S: Maintained @@ -717,6 +718,7 @@ P: Larry Finger M: Larry.Finger@lwfinger.net P: Stefano Brivio M: st3@riseup.net +L: linux-wireless@vger.kernel.org W: http://bcm43xx.berlios.de/ S: Maintained @@ -911,6 +913,12 @@ M: maxextreme@gmail.com L: linux-kernel@vger.kernel.org S: Maintained +CFG80211 and NL80211 +P: Johannes Berg +M: johannes@sipsolutions.net +L: linux-wireless@vger.kernel.org +S: Maintained + COMMON INTERNET FILE SYSTEM (CIFS) P: Steve French M: sfrench@samba.org @@ -1059,9 +1067,8 @@ S: Maintained CYCLADES 2X SYNC CARD DRIVER P: Arnaldo Carvalho de Melo -M: acme@conectiva.com.br -W: http://advogato.org/person/acme -L: cycsyn-devel@bazar.conectiva.com.br +M: acme@ghostprotocols.net +W: http://oops.ghostprotocols.net:81/blog S: Maintained CYCLADES ASYNC MUX DRIVER @@ -1102,7 +1109,7 @@ S: Maintained DCCP PROTOCOL P: Arnaldo Carvalho de Melo -M: acme@mandriva.com +M: acme@ghostprotocols.net L: dccp@vger.kernel.org W: http://linux-net.osdl.org/index.php/DCCP S: Maintained @@ -1588,6 +1595,7 @@ S: Supported HOST AP DRIVER P: Jouni Malinen M: jkmaline@cc.hut.fi +L: linux-wireless@vger.kernel.org L: hostap@shmoo.com W: http://hostap.epitest.fi/ S: Maintained @@ -1854,6 +1862,7 @@ P: Yi Zhu M: yi.zhu@intel.com P: James Ketrenos M: jketreno@linux.intel.com +L: linux-wireless@vger.kernel.org L: ipw2100-devel@lists.sourceforge.net L: http://lists.sourceforge.net/mailman/listinfo/ipw2100-devel W: http://ipw2100.sourceforge.net @@ -1864,6 +1873,7 @@ P: Yi Zhu M: yi.zhu@intel.com P: James Ketrenos M: jketreno@linux.intel.com +L: linux-wireless@vger.kernel.org L: ipw2100-devel@lists.sourceforge.net L: http://lists.sourceforge.net/mailman/listinfo/ipw2100-devel W: http://ipw2200.sourceforge.net @@ -1895,7 +1905,7 @@ S: Supported IPX NETWORK LAYER P: Arnaldo Carvalho de Melo -M: acme@conectiva.com.br +M: acme@ghostprotocols.net L: netdev@vger.kernel.org S: Maintained @@ -2132,7 +2142,7 @@ S: Supported LLC (802.2) P: Arnaldo 
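Above, Documentation/networking/dccp.txt gains two getsockopt()-only CCID 3 options that return TFRC state. A rough userspace sketch of the TX variant follows; it is illustrative only and not part of the patch. SOL_DCCP, SOCK_DCCP, IPPROTO_DCCP and DCCP_SOCKOPT_CCID_TX_INFO are assumed to be visible from the usual DCCP/socket headers, and struct tfrc_tx_info is assumed to come from a TFRC header whose name is elided in the documentation hunk.

	#include <stdio.h>
	#include <sys/socket.h>
	#include <linux/dccp.h>		/* DCCP_SOCKOPT_CCID_TX_INFO (assumed location) */
	#include <linux/tfrc.h>		/* struct tfrc_tx_info (assumed location; the
					 * doc hunk above elides the header name) */

	#ifndef SOL_DCCP
	#define SOL_DCCP	269	/* assumed values, in case the libc headers */
	#endif				/* do not carry them yet */
	#ifndef SOCK_DCCP
	#define SOCK_DCCP	6
	#endif
	#ifndef IPPROTO_DCCP
	#define IPPROTO_DCCP	33
	#endif

	int main(void)
	{
		struct tfrc_tx_info info;
		socklen_t len = sizeof(info);	/* must be >= sizeof(struct tfrc_tx_info) */
		int fd = socket(AF_INET, SOCK_DCCP, IPPROTO_DCCP);

		if (fd < 0)
			return 1;
		/* getsockopt-only; a real program would connect the socket first so
		 * that CCID 3 state actually exists. */
		if (getsockopt(fd, SOL_DCCP, DCCP_SOCKOPT_CCID_TX_INFO, &info, &len) == 0)
			printf("CCID 3 TX info: %u bytes returned\n", (unsigned int)len);
		return 0;
	}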
Carvalho de Melo -M: acme@conectiva.com.br +M: acme@ghostprotocols.net S: Maintained LINUX FOR 64BIT POWERPC @@ -2589,6 +2599,7 @@ P: Pavel Roskin M: proski@gnu.org P: David Gibson M: hermes@gibson.dropbear.id.au +L: linux-wireless@vger.kernel.org L: orinoco-users@lists.sourceforge.net L: orinoco-devel@lists.sourceforge.net W: http://www.nongnu.org/orinoco/ @@ -2768,7 +2779,7 @@ S: Supported PRISM54 WIRELESS DRIVER P: Prism54 Development Team M: developers@islsm.org -L: netdev@vger.kernel.org +L: linux-wireless@vger.kernel.org W: http://prism54.org S: Maintained @@ -2839,7 +2850,7 @@ S: Maintained RAYLINK/WEBGEAR 802.11 WIRELESS LAN DRIVER P: Corey Thomas M: corey@world.std.com -L: linux-kernel@vger.kernel.org +L: linux-wireless@vger.kernel.org S: Maintained RANDOM NUMBER DRIVER @@ -3107,7 +3118,7 @@ M: josejx@gentoo.org P: Daniel Drake M: dsd@gentoo.org W: http://softmac.sipsolutions.net/ -L: netdev@vger.kernel.org +L: linux-wireless@vger.kernel.org S: Maintained SOFTWARE RAID (Multiple Disks) SUPPORT @@ -3834,6 +3845,7 @@ S: Maintained WAVELAN NETWORK DRIVER & WIRELESS EXTENSIONS P: Jean Tourrilhes M: jt@hpl.hp.com +L: linux-wireless@vger.kernel.org W: http://www.hpl.hp.com/personal/Jean_Tourrilhes/Linux/ S: Maintained @@ -3850,8 +3862,9 @@ S: Maintained WL3501 WIRELESS PCMCIA CARD DRIVER P: Arnaldo Carvalho de Melo -M: acme@conectiva.com.br -W: http://advogato.org/person/acme +M: acme@ghostprotocols.net +L: linux-wireless@vger.kernel.org +W: http://oops.ghostprotocols.net:81/blog S: Maintained X.25 NETWORK LAYER @@ -3914,6 +3927,7 @@ M: dsd@gentoo.org P: Ulrich Kunitz M: kune@deine-taler.de W: http://zd1211.ath.cx/wiki/DriverRewrite +L: linux-wireless@vger.kernel.org L: zd1211-devs@lists.sourceforge.net (subscribers-only) S: Maintained diff -puN arch/ia64/hp/sim/simeth.c~git-net arch/ia64/hp/sim/simeth.c --- a/arch/ia64/hp/sim/simeth.c~git-net +++ a/arch/ia64/hp/sim/simeth.c @@ -427,7 +427,6 @@ make_new_skb(struct net_device *dev) printk(KERN_NOTICE "%s: memory squeeze. 
dropping packet.\n", dev->name); return NULL; } - nskb->dev = dev; skb_reserve(nskb, 2); /* Align IP on 16 byte boundaries */ @@ -474,7 +473,7 @@ simeth_rx(struct net_device *dev) * XXX Fix me * Should really do a csum+copy here */ - memcpy(skb->data, frame, len); + skb_copy_to_linear_data(skb, frame, len); #endif skb->protocol = eth_type_trans(skb, dev); diff -puN arch/ia64/sn/kernel/xpnet.c~git-net arch/ia64/sn/kernel/xpnet.c --- a/arch/ia64/sn/kernel/xpnet.c~git-net +++ a/arch/ia64/sn/kernel/xpnet.c @@ -233,7 +233,7 @@ xpnet_receive(partid_t partid, int chann "%lu)\n", skb->data, &msg->data, (size_t) msg->embedded_bytes); - memcpy(skb->data, &msg->data, (size_t) msg->embedded_bytes); + skb_copy_to_linear_data(skb, &msg->data, (size_t)msg->embedded_bytes); } else { dev_dbg(xpnet, "transferring buffer to the skb->data area;\n\t" "bte_copy(0x%p, 0x%p, %hu)\n", (void *)msg->buf_pa, @@ -264,17 +264,16 @@ xpnet_receive(partid_t partid, int chann dev_dbg(xpnet, "head=0x%p skb->data=0x%p skb->tail=0x%p " "skb->end=0x%p skb->len=%d\n", (void *) skb->head, - (void *) skb->data, (void *) skb->tail, (void *) skb->end, + (void *)skb->data, skb_tail_pointer(skb), skb_end_pointer(skb), skb->len); - skb->dev = xpnet_device; skb->protocol = eth_type_trans(skb, xpnet_device); skb->ip_summed = CHECKSUM_UNNECESSARY; dev_dbg(xpnet, "passing skb to network layer; \n\tskb->head=0x%p " "skb->data=0x%p skb->tail=0x%p skb->end=0x%p skb->len=%d\n", - (void *) skb->head, (void *) skb->data, (void *) skb->tail, - (void *) skb->end, skb->len); + (void *)skb->head, (void *)skb->data, skb_tail_pointer(skb), + skb_end_pointer(skb), skb->len); xpnet_device->last_rx = jiffies; @@ -476,7 +475,7 @@ xpnet_dev_hard_start_xmit(struct sk_buff dev_dbg(xpnet, ">skb->head=0x%p skb->data=0x%p skb->tail=0x%p " "skb->end=0x%p skb->len=%d\n", (void *) skb->head, - (void *) skb->data, (void *) skb->tail, (void *) skb->end, + (void *)skb->data, skb_tail_pointer(skb), skb_end_pointer(skb), skb->len); @@ -498,7 +497,7 @@ xpnet_dev_hard_start_xmit(struct sk_buff /* get the beginning of the first cacheline and end of last */ start_addr = ((u64) skb->data & ~(L1_CACHE_BYTES - 1)); - end_addr = L1_CACHE_ALIGN((u64) skb->tail); + end_addr = L1_CACHE_ALIGN((u64)skb_tail_pointer(skb)); /* calculate how many bytes to embed in the XPC message */ embedded_bytes = 0; @@ -567,14 +566,15 @@ xpnet_dev_hard_start_xmit(struct sk_buff msg->version = XPNET_VERSION_EMBED; dev_dbg(xpnet, "calling memcpy(0x%p, 0x%p, 0x%lx)\n", &msg->data, skb->data, (size_t) embedded_bytes); - memcpy(&msg->data, skb->data, (size_t) embedded_bytes); + skb_copy_from_linear_data(skb, &msg->data, + (size_t)embedded_bytes); } else { msg->version = XPNET_VERSION; } msg->magic = XPNET_MAGIC; msg->size = end_addr - start_addr; msg->leadin_ignore = (u64) skb->data - start_addr; - msg->tailout_ignore = end_addr - (u64) skb->tail; + msg->tailout_ignore = end_addr - (u64)skb_tail_pointer(skb); msg->buf_pa = __pa(start_addr); dev_dbg(xpnet, "sending XPC message to %d:%d\nmsg->buf_pa=" diff -puN arch/ppc/8260_io/enet.c~git-net arch/ppc/8260_io/enet.c --- a/arch/ppc/8260_io/enet.c~git-net +++ a/arch/ppc/8260_io/enet.c @@ -477,7 +477,6 @@ for (;;) { cep->stats.rx_dropped++; } else { - skb->dev = dev; skb_put(skb,pkt_len-4); /* Make room */ eth_copy_and_sum(skb, (unsigned char *)__va(bdp->cbd_bufaddr), diff -puN arch/ppc/8260_io/fcc_enet.c~git-net arch/ppc/8260_io/fcc_enet.c --- a/arch/ppc/8260_io/fcc_enet.c~git-net +++ a/arch/ppc/8260_io/fcc_enet.c @@ -734,7 +734,6 @@ for (;;) { 
cep->stats.rx_dropped++; } else { - skb->dev = dev; skb_put(skb,pkt_len); /* Make room */ eth_copy_and_sum(skb, (unsigned char *)__va(bdp->cbd_bufaddr), diff -puN arch/ppc/8xx_io/enet.c~git-net arch/ppc/8xx_io/enet.c --- a/arch/ppc/8xx_io/enet.c~git-net +++ a/arch/ppc/8xx_io/enet.c @@ -506,7 +506,6 @@ for (;;) { cep->stats.rx_dropped++; } else { - skb->dev = dev; skb_put(skb,pkt_len-4); /* Make room */ eth_copy_and_sum(skb, cep->rx_vaddr[bdp - cep->rx_bd_base], diff -puN arch/ppc/8xx_io/fec.c~git-net arch/ppc/8xx_io/fec.c --- a/arch/ppc/8xx_io/fec.c~git-net +++ a/arch/ppc/8xx_io/fec.c @@ -724,7 +724,6 @@ while (!(bdp->cbd_sc & BD_ENET_RX_EMPTY) printk("%s: Memory squeeze, dropping packet.\n", dev->name); fep->stats.rx_dropped++; } else { - skb->dev = dev; skb_put(skb,pkt_len-4); /* Make room */ eth_copy_and_sum(skb, data, pkt_len-4, 0); skb->protocol=eth_type_trans(skb,dev); diff -puN arch/s390/appldata/appldata_net_sum.c~git-net arch/s390/appldata/appldata_net_sum.c --- a/arch/s390/appldata/appldata_net_sum.c~git-net +++ a/arch/s390/appldata/appldata_net_sum.c @@ -108,10 +108,10 @@ static void appldata_get_net_sum_data(vo collisions = 0; read_lock(&dev_base_lock); for (dev = dev_base; dev != NULL; dev = dev->next) { - if (dev->get_stats == NULL) { + stats = dev->get_stats(dev); + if (stats == NULL) { continue; } - stats = dev->get_stats(dev); rx_packets += stats->rx_packets; tx_packets += stats->tx_packets; rx_bytes += stats->rx_bytes; diff -puN arch/s390/lib/Makefile~git-net arch/s390/lib/Makefile --- a/arch/s390/lib/Makefile~git-net +++ a/arch/s390/lib/Makefile @@ -5,6 +5,6 @@ EXTRA_AFLAGS := -traditional lib-y += delay.o string.o uaccess_std.o uaccess_pt.o qrnnd.o -lib-$(CONFIG_32BIT) += div64.o +obj-$(CONFIG_32BIT) += div64.o lib-$(CONFIG_64BIT) += uaccess_mvcos.o lib-$(CONFIG_SMP) += spinlock.o diff -puN arch/s390/lib/div64.c~git-net arch/s390/lib/div64.c --- a/arch/s390/lib/div64.c~git-net +++ a/arch/s390/lib/div64.c @@ -147,5 +147,3 @@ uint32_t __div64_32(uint64_t *n, uint32_ } #endif /* MARCH_G5 */ - -EXPORT_SYMBOL(__div64_32); diff -puN arch/um/drivers/daemon_kern.c~git-net arch/um/drivers/daemon_kern.c --- a/arch/um/drivers/daemon_kern.c~git-net +++ a/arch/um/drivers/daemon_kern.c @@ -46,7 +46,7 @@ static int daemon_read(int fd, struct sk { *skb = ether_adjust_skb(*skb, ETH_HEADER_OTHER); if(*skb == NULL) return(-ENOMEM); - return(net_recvfrom(fd, (*skb)->mac.raw, + return(net_recvfrom(fd, skb_mac_header(*skb), (*skb)->dev->mtu + ETH_HEADER_OTHER)); } diff -puN arch/um/drivers/mcast_kern.c~git-net arch/um/drivers/mcast_kern.c --- a/arch/um/drivers/mcast_kern.c~git-net +++ a/arch/um/drivers/mcast_kern.c @@ -50,7 +50,7 @@ static int mcast_read(int fd, struct sk_ { *skb = ether_adjust_skb(*skb, ETH_HEADER_OTHER); if(*skb == NULL) return(-ENOMEM); - return(net_recvfrom(fd, (*skb)->mac.raw, + return(net_recvfrom(fd, skb_mac_header(*skb), (*skb)->dev->mtu + ETH_HEADER_OTHER)); } diff -puN arch/um/drivers/net_kern.c~git-net arch/um/drivers/net_kern.c --- a/arch/um/drivers/net_kern.c~git-net +++ a/arch/um/drivers/net_kern.c @@ -55,7 +55,7 @@ static int uml_net_rx(struct net_device skb->dev = dev; skb_put(skb, dev->mtu); - skb->mac.raw = skb->data; + skb_reset_mac_header(skb); pkt_len = (*lp->read)(lp->fd, &skb, lp); if (pkt_len > 0) { diff -puN arch/um/drivers/pcap_kern.c~git-net arch/um/drivers/pcap_kern.c --- a/arch/um/drivers/pcap_kern.c~git-net +++ a/arch/um/drivers/pcap_kern.c @@ -36,7 +36,7 @@ static int pcap_read(int fd, struct sk_b { *skb = ether_adjust_skb(*skb, 
ETH_HEADER_OTHER); if(*skb == NULL) return(-ENOMEM); - return(pcap_user_read(fd, (*skb)->mac.raw, + return(pcap_user_read(fd, skb_mac_header(*skb), (*skb)->dev->mtu + ETH_HEADER_OTHER, (struct pcap_data *) &lp->user)); } diff -puN arch/um/drivers/slip_kern.c~git-net arch/um/drivers/slip_kern.c --- a/arch/um/drivers/slip_kern.c~git-net +++ a/arch/um/drivers/slip_kern.c @@ -49,7 +49,7 @@ static unsigned short slip_protocol(stru static int slip_read(int fd, struct sk_buff **skb, struct uml_net_private *lp) { - return(slip_user_read(fd, (*skb)->mac.raw, (*skb)->dev->mtu, + return(slip_user_read(fd, skb_mac_header(*skb), (*skb)->dev->mtu, (struct slip_data *) &lp->user)); } diff -puN arch/um/drivers/slirp_kern.c~git-net arch/um/drivers/slirp_kern.c --- a/arch/um/drivers/slirp_kern.c~git-net +++ a/arch/um/drivers/slirp_kern.c @@ -53,7 +53,7 @@ static unsigned short slirp_protocol(str static int slirp_read(int fd, struct sk_buff **skb, struct uml_net_private *lp) { - return(slirp_user_read(fd, (*skb)->mac.raw, (*skb)->dev->mtu, + return(slirp_user_read(fd, skb_mac_header(*skb), (*skb)->dev->mtu, (struct slirp_data *) &lp->user)); } diff -puN arch/um/os-Linux/drivers/ethertap_kern.c~git-net arch/um/os-Linux/drivers/ethertap_kern.c --- a/arch/um/os-Linux/drivers/ethertap_kern.c~git-net +++ a/arch/um/os-Linux/drivers/ethertap_kern.c @@ -43,7 +43,7 @@ static int etap_read(int fd, struct sk_b *skb = ether_adjust_skb(*skb, ETH_HEADER_ETHERTAP); if(*skb == NULL) return(-ENOMEM); - len = net_recvfrom(fd, (*skb)->mac.raw, + len = net_recvfrom(fd, skb_mac_header(*skb), (*skb)->dev->mtu + 2 * ETH_HEADER_ETHERTAP); if(len <= 0) return(len); skb_pull(*skb, 2); diff -puN arch/um/os-Linux/drivers/tuntap_kern.c~git-net arch/um/os-Linux/drivers/tuntap_kern.c --- a/arch/um/os-Linux/drivers/tuntap_kern.c~git-net +++ a/arch/um/os-Linux/drivers/tuntap_kern.c @@ -43,7 +43,7 @@ static int tuntap_read(int fd, struct sk { *skb = ether_adjust_skb(*skb, ETH_HEADER_OTHER); if(*skb == NULL) return(-ENOMEM); - return(net_read(fd, (*skb)->mac.raw, + return(net_read(fd, skb_mac_header(*skb), (*skb)->dev->mtu + ETH_HEADER_OTHER)); } diff -puN arch/xtensa/platform-iss/network.c~git-net arch/xtensa/platform-iss/network.c --- a/arch/xtensa/platform-iss/network.c~git-net +++ a/arch/xtensa/platform-iss/network.c @@ -386,7 +386,7 @@ static int iss_net_rx(struct net_device /* Setup skb */ skb->dev = dev; - skb->mac.raw = skb->data; + skb_reset_mac_header(skb); pkt_len = lp->tp.read(lp, &skb); skb_put(skb, pkt_len); diff -puN drivers/atm/ambassador.c~git-net drivers/atm/ambassador.c --- a/drivers/atm/ambassador.c~git-net +++ a/drivers/atm/ambassador.c @@ -821,7 +821,7 @@ static inline void fill_rx_pool (amb_dev } // cast needed as there is no %? 
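The UML and xtensa hunks above, and the ATM driver hunks that follow, are instances of the sk_buff accessor conversion that runs through the rest of this patch: direct pokes at skb->mac.raw, skb->data, skb->tail and skb->end become skb_reset_mac_header(), skb_copy_to_linear_data(), skb_tail_pointer() and skb_end_pointer(), and redundant skb->dev assignments before eth_type_trans() are dropped. The receive routine below is a hypothetical sketch of the new idioms, not code from any hunk here; the old forms are kept in the comments.

	/* Assumes <linux/skbuff.h>, <linux/netdevice.h> and <linux/etherdevice.h>;
	 * "frame" is a complete Ethernet frame of "len" bytes. */
	static void example_rx(struct net_device *dev, const u8 *frame, unsigned int len)
	{
		struct sk_buff *skb = dev_alloc_skb(len + 2);

		if (!skb)
			return;

		skb_reserve(skb, 2);			  /* align the IP header */
		skb_reset_mac_header(skb);		  /* was: skb->mac.raw = skb->data; */
		skb_copy_to_linear_data(skb, frame, len); /* was: memcpy(skb->data, frame, len); */
		skb_put(skb, len);

		/* was: (long)(skb->end - skb->tail) */
		printk(KERN_DEBUG "%s: %ld bytes of tailroom\n", dev->name,
		       (long)(skb_end_pointer(skb) - skb_tail_pointer(skb)));

		/* was: skb->dev = dev;  -- eth_type_trans() already sets it */
		skb->protocol = eth_type_trans(skb, dev);
		netif_rx(skb);
	}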
for pointer differences PRINTD (DBG_SKB, "allocated skb at %p, head %p, area %li", - skb, skb->head, (long) (skb->end - skb->head)); + skb, skb->head, (long) (skb_end_pointer(skb) - skb->head)); rx.handle = virt_to_bus (skb); rx.host_address = cpu_to_be32 (virt_to_bus (skb->data)); if (rx_give (dev, &rx, pool)) diff -puN drivers/atm/atmtcp.c~git-net drivers/atm/atmtcp.c --- a/drivers/atm/atmtcp.c~git-net +++ a/drivers/atm/atmtcp.c @@ -221,7 +221,7 @@ static int atmtcp_v_send(struct atm_vcc hdr->vpi = htons(vcc->vpi); hdr->vci = htons(vcc->vci); hdr->length = htonl(skb->len); - memcpy(skb_put(new_skb,skb->len),skb->data,skb->len); + skb_copy_from_linear_data(skb, skb_put(new_skb, skb->len), skb->len); if (vcc->pop) vcc->pop(vcc,skb); else dev_kfree_skb(skb); out_vcc->push(out_vcc,new_skb); @@ -310,7 +310,7 @@ static int atmtcp_c_send(struct atm_vcc goto done; } __net_timestamp(new_skb); - memcpy(skb_put(new_skb,skb->len),skb->data,skb->len); + skb_copy_from_linear_data(skb, skb_put(new_skb, skb->len), skb->len); out_vcc->push(out_vcc,new_skb); atomic_inc(&vcc->stats->tx); atomic_inc(&out_vcc->stats->rx); diff -puN drivers/atm/eni.c~git-net drivers/atm/eni.c --- a/drivers/atm/eni.c~git-net +++ a/drivers/atm/eni.c @@ -536,7 +536,7 @@ static int rx_aal0(struct atm_vcc *vcc) return 0; } skb_put(skb,length); - skb_set_timestamp(skb, &eni_vcc->timestamp); + skb->tstamp = eni_vcc->timestamp; DPRINTK("got len %ld\n",length); if (do_rx_dma(vcc,skb,1,length >> 2,length >> 2)) return 1; eni_vcc->rxing++; @@ -701,7 +701,7 @@ static void get_service(struct atm_dev * DPRINTK("Grr, servicing VCC %ld twice\n",vci); continue; } - do_gettimeofday(&ENI_VCC(vcc)->timestamp); + ENI_VCC(vcc)->timestamp = ktime_get_real(); ENI_VCC(vcc)->next = NULL; if (vcc->qos.rxtp.traffic_class == ATM_CBR) { if (eni_dev->fast) diff -puN drivers/atm/eni.h~git-net drivers/atm/eni.h --- a/drivers/atm/eni.h~git-net +++ a/drivers/atm/eni.h @@ -59,7 +59,7 @@ struct eni_vcc { int rxing; /* number of pending PDUs */ int servicing; /* number of waiting VCs (0 or 1) */ int txing; /* number of pending TX bytes */ - struct timeval timestamp; /* for RX timing */ + ktime_t timestamp; /* for RX timing */ struct atm_vcc *next; /* next pending RX */ struct sk_buff *last; /* last PDU being DMAed (used to carry discard information) */ diff -puN drivers/atm/he.c~git-net drivers/atm/he.c --- a/drivers/atm/he.c~git-net +++ a/drivers/atm/he.c @@ -1901,13 +1901,13 @@ he_service_rbrq(struct he_dev *he_dev, i case ATM_AAL0: /* 2.10.1.5 raw cell receive */ skb->len = ATM_AAL0_SDU; - skb->tail = skb->data + skb->len; + skb_set_tail_pointer(skb, skb->len); break; case ATM_AAL5: /* 2.10.1.2 aal5 receive */ skb->len = AAL5_LEN(skb->data, he_vcc->pdu_len); - skb->tail = skb->data + skb->len; + skb_set_tail_pointer(skb, skb->len); #ifdef USE_CHECKSUM_HW if (vcc->vpi == 0 && vcc->vci >= ATM_NOT_RSV_VCI) { skb->ip_summed = CHECKSUM_COMPLETE; diff -puN drivers/atm/idt77252.c~git-net drivers/atm/idt77252.c --- a/drivers/atm/idt77252.c~git-net +++ a/drivers/atm/idt77252.c @@ -1065,7 +1065,8 @@ dequeue_rx(struct idt77252_dev *card, st vcc = vc->rx_vcc; pci_dma_sync_single_for_cpu(card->pcidev, IDT77252_PRV_PADDR(skb), - skb->end - skb->data, PCI_DMA_FROMDEVICE); + skb_end_pointer(skb) - skb->data, + PCI_DMA_FROMDEVICE); if ((vcc->qos.aal == ATM_AAL0) || (vcc->qos.aal == ATM_AAL34)) { @@ -1194,7 +1195,8 @@ dequeue_rx(struct idt77252_dev *card, st } pci_unmap_single(card->pcidev, IDT77252_PRV_PADDR(skb), - skb->end - skb->data, PCI_DMA_FROMDEVICE); + 
skb_end_pointer(skb) - skb->data, + PCI_DMA_FROMDEVICE); sb_pool_remove(card, skb); skb_trim(skb, len); @@ -1267,7 +1269,7 @@ idt77252_rx_raw(struct idt77252_dev *car tail = readl(SAR_REG_RAWCT); pci_dma_sync_single_for_cpu(card->pcidev, IDT77252_PRV_PADDR(queue), - queue->end - queue->head - 16, + skb_end_pointer(queue) - queue->head - 16, PCI_DMA_FROMDEVICE); while (head != tail) { @@ -1363,7 +1365,8 @@ drop: queue = card->raw_cell_head; pci_dma_sync_single_for_cpu(card->pcidev, IDT77252_PRV_PADDR(queue), - queue->end - queue->data, + (skb_end_pointer(queue) - + queue->data), PCI_DMA_FROMDEVICE); } else { card->raw_cell_head = NULL; @@ -1816,7 +1819,8 @@ push_rx_skb(struct idt77252_dev *card, s u32 handle; u32 addr; - skb->data = skb->tail = skb->head; + skb->data = skb->head; + skb_reset_tail_pointer(skb); skb->len = 0; skb_reserve(skb, 16); @@ -1835,7 +1839,6 @@ push_rx_skb(struct idt77252_dev *card, s skb_put(skb, SAR_FB_SIZE_3); break; default: - dev_kfree_skb(skb); return -1; } @@ -1874,7 +1877,7 @@ add_rx_skb(struct idt77252_dev *card, in } paddr = pci_map_single(card->pcidev, skb->data, - skb->end - skb->data, + skb_end_pointer(skb) - skb->data, PCI_DMA_FROMDEVICE); IDT77252_PRV_PADDR(skb) = paddr; @@ -1888,7 +1891,7 @@ add_rx_skb(struct idt77252_dev *card, in outunmap: pci_unmap_single(card->pcidev, IDT77252_PRV_PADDR(skb), - skb->end - skb->data, PCI_DMA_FROMDEVICE); + skb_end_pointer(skb) - skb->data, PCI_DMA_FROMDEVICE); handle = IDT77252_PRV_POOL(skb); card->sbpool[POOL_QUEUE(handle)].skb[POOL_INDEX(handle)] = NULL; @@ -1905,12 +1908,14 @@ recycle_rx_skb(struct idt77252_dev *card int err; pci_dma_sync_single_for_device(card->pcidev, IDT77252_PRV_PADDR(skb), - skb->end - skb->data, PCI_DMA_FROMDEVICE); + skb_end_pointer(skb) - skb->data, + PCI_DMA_FROMDEVICE); err = push_rx_skb(card, skb, POOL_QUEUE(handle)); if (err) { pci_unmap_single(card->pcidev, IDT77252_PRV_PADDR(skb), - skb->end - skb->data, PCI_DMA_FROMDEVICE); + skb_end_pointer(skb) - skb->data, + PCI_DMA_FROMDEVICE); sb_pool_remove(card, skb); dev_kfree_skb(skb); } @@ -3122,7 +3127,8 @@ deinit_card(struct idt77252_dev *card) if (skb) { pci_unmap_single(card->pcidev, IDT77252_PRV_PADDR(skb), - skb->end - skb->data, + (skb_end_pointer(skb) - + skb->data), PCI_DMA_FROMDEVICE); card->sbpool[i].skb[j] = NULL; dev_kfree_skb(skb); diff -puN drivers/atm/nicstar.c~git-net drivers/atm/nicstar.c --- a/drivers/atm/nicstar.c~git-net +++ a/drivers/atm/nicstar.c @@ -2208,7 +2208,7 @@ static void dequeue_rx(ns_dev *card, ns_ if (i == 1 && ns_rsqe_eopdu(rsqe)) *((u32 *) sb->data) |= 0x00000002; skb_put(sb, NS_AAL0_HEADER); - memcpy(sb->tail, cell, ATM_CELL_PAYLOAD); + memcpy(skb_tail_pointer(sb), cell, ATM_CELL_PAYLOAD); skb_put(sb, ATM_CELL_PAYLOAD); ATM_SKB(sb)->vcc = vcc; __net_timestamp(sb); @@ -2252,7 +2252,8 @@ static void dequeue_rx(ns_dev *card, ns_ vc->rx_iov = iovb; NS_SKB(iovb)->iovcnt = 0; iovb->len = 0; - iovb->tail = iovb->data = iovb->head; + iovb->data = iovb->head; + skb_reset_tail_pointer(iovb); NS_SKB(iovb)->vcc = vcc; /* IMPORTANT: a pointer to the sk_buff containing the small or large buffer is stored as iovec base, NOT a pointer to the @@ -2265,7 +2266,8 @@ static void dequeue_rx(ns_dev *card, ns_ recycle_iovec_rx_bufs(card, (struct iovec *) iovb->data, NS_MAX_IOVECS); NS_SKB(iovb)->iovcnt = 0; iovb->len = 0; - iovb->tail = iovb->data = iovb->head; + iovb->data = iovb->head; + skb_reset_tail_pointer(iovb); NS_SKB(iovb)->vcc = vcc; } iov = &((struct iovec *) iovb->data)[NS_SKB(iovb)->iovcnt++]; @@ -2393,7 +2395,7 
@@ static void dequeue_rx(ns_dev *card, ns_ skb->destructor = ns_lb_destructor; #endif /* NS_USE_DESTRUCTORS */ skb_push(skb, NS_SMBUFSIZE); - memcpy(skb->data, sb->data, NS_SMBUFSIZE); + skb_copy_from_linear_data(sb, skb->data, NS_SMBUFSIZE); skb_put(skb, len - NS_SMBUFSIZE); ATM_SKB(skb)->vcc = vcc; __net_timestamp(skb); @@ -2477,7 +2479,7 @@ static void dequeue_rx(ns_dev *card, ns_ { /* Copy the small buffer to the huge buffer */ sb = (struct sk_buff *) iov->iov_base; - memcpy(hb->data, sb->data, iov->iov_len); + skb_copy_from_linear_data(sb, hb->data, iov->iov_len); skb_put(hb, iov->iov_len); remaining = len - iov->iov_len; iov++; @@ -2489,7 +2491,7 @@ static void dequeue_rx(ns_dev *card, ns_ { lb = (struct sk_buff *) iov->iov_base; tocopy = min_t(int, remaining, iov->iov_len); - memcpy(hb->tail, lb->data, tocopy); + skb_copy_from_linear_data(lb, skb_tail_pointer(hb), tocopy); skb_put(hb, tocopy); iov++; remaining -= tocopy; diff -puN drivers/block/aoe/aoe.h~git-net drivers/block/aoe/aoe.h --- a/drivers/block/aoe/aoe.h~git-net +++ a/drivers/block/aoe/aoe.h @@ -48,6 +48,15 @@ struct aoe_hdr { __be32 tag; }; +#ifdef __KERNEL__ +#include + +static inline struct aoe_hdr *aoe_hdr(const struct sk_buff *skb) +{ + return (struct aoe_hdr *)skb_mac_header(skb); +} +#endif + struct aoe_atahdr { unsigned char aflags; unsigned char errfeat; diff -puN drivers/block/aoe/aoecmd.c~git-net drivers/block/aoe/aoecmd.c --- a/drivers/block/aoe/aoecmd.c~git-net +++ a/drivers/block/aoe/aoecmd.c @@ -27,7 +27,8 @@ new_skb(ulong len) skb = alloc_skb(len, GFP_ATOMIC); if (skb) { - skb->nh.raw = skb->mac.raw = skb->data; + skb_reset_mac_header(skb); + skb_reset_network_header(skb); skb->protocol = __constant_htons(ETH_P_AOE); skb->priority = 0; skb->next = skb->prev = NULL; @@ -118,7 +119,7 @@ aoecmd_ata_rw(struct aoedev *d, struct f /* initialize the headers & frame */ skb = f->skb; - h = (struct aoe_hdr *) skb->mac.raw; + h = aoe_hdr(skb); ah = (struct aoe_atahdr *) (h+1); skb_put(skb, sizeof *h + sizeof *ah); memset(h, 0, skb->len); @@ -207,7 +208,7 @@ aoecmd_cfg_pkts(ushort aoemajor, unsigne skb->dev = ifp; if (sl_tail == NULL) sl_tail = skb; - h = (struct aoe_hdr *) skb->mac.raw; + h = aoe_hdr(skb); memset(h, 0, sizeof *h + sizeof *ch); memset(h->dst, 0xff, sizeof h->dst); @@ -300,7 +301,7 @@ rexmit(struct aoedev *d, struct frame *f aoechr_error(buf); skb = f->skb; - h = (struct aoe_hdr *) skb->mac.raw; + h = aoe_hdr(skb); ah = (struct aoe_atahdr *) (h+1); f->tag = n; h->tag = cpu_to_be32(n); @@ -529,7 +530,7 @@ aoecmd_ata_rsp(struct sk_buff *skb) char ebuf[128]; u16 aoemajor; - hin = (struct aoe_hdr *) skb->mac.raw; + hin = aoe_hdr(skb); aoemajor = be16_to_cpu(get_unaligned(&hin->major)); d = aoedev_by_aoeaddr(aoemajor, hin->minor); if (d == NULL) { @@ -561,7 +562,7 @@ aoecmd_ata_rsp(struct sk_buff *skb) calc_rttavg(d, tsince(f->tag)); ahin = (struct aoe_atahdr *) (hin+1); - hout = (struct aoe_hdr *) f->skb->mac.raw; + hout = aoe_hdr(f->skb); ahout = (struct aoe_atahdr *) (hout+1); buf = f->buf; @@ -695,7 +696,7 @@ aoecmd_ata_id(struct aoedev *d) /* initialize the headers & frame */ skb = f->skb; - h = (struct aoe_hdr *) skb->mac.raw; + h = aoe_hdr(skb); ah = (struct aoe_atahdr *) (h+1); skb_put(skb, sizeof *h + sizeof *ah); memset(h, 0, skb->len); @@ -726,7 +727,7 @@ aoecmd_cfg_rsp(struct sk_buff *skb) enum { MAXFRAMES = 16 }; u16 n; - h = (struct aoe_hdr *) skb->mac.raw; + h = aoe_hdr(skb); ch = (struct aoe_cfghdr *) (h+1); /* diff -puN drivers/block/aoe/aoenet.c~git-net drivers/block/aoe/aoenet.c --- 
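The drivers/block/aoe hunks here replace open-coded casts of skb->mac.raw with a typed aoe_hdr() accessor, and the Bluetooth hunks that follow switch to the analogous hci_event_hdr()/hci_acl_hdr()/hci_sco_hdr() helpers (those wrap skb->data rather than the MAC header). The same pattern for a made-up protocol header, as a sketch:

	#include <linux/skbuff.h>
	#include <linux/types.h>

	/* Hypothetical on-wire header, for illustration only. */
	struct exproto_hdr {
		__be16	opcode;
		__be16	len;
	} __attribute__((packed));

	static inline struct exproto_hdr *exproto_hdr(const struct sk_buff *skb)
	{
		/* was: (struct exproto_hdr *)skb->mac.raw */
		return (struct exproto_hdr *)skb_mac_header(skb);
	}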
a/drivers/block/aoe/aoenet.c~git-net +++ a/drivers/block/aoe/aoenet.c @@ -123,7 +123,7 @@ aoenet_rcv(struct sk_buff *skb, struct n goto exit; skb_push(skb, ETH_HLEN); /* (1) */ - h = (struct aoe_hdr *) skb->mac.raw; + h = aoe_hdr(skb); n = be32_to_cpu(get_unaligned(&h->tag)); if ((h->verfl & AOEFL_RSP) == 0 || (n & 1<<31)) goto exit; diff -puN drivers/bluetooth/bfusb.c~git-net drivers/bluetooth/bfusb.c --- a/drivers/bluetooth/bfusb.c~git-net +++ a/drivers/bluetooth/bfusb.c @@ -527,7 +527,7 @@ static int bfusb_send_frame(struct sk_bu buf[2] = (size == BFUSB_MAX_BLOCK_SIZE) ? 0 : size; memcpy(skb_put(nskb, 3), buf, 3); - memcpy(skb_put(nskb, size), skb->data + sent, size); + skb_copy_from_linear_data_offset(skb, sent, skb_put(nskb, size), size); sent += size; count -= size; diff -puN drivers/bluetooth/bluecard_cs.c~git-net drivers/bluetooth/bluecard_cs.c --- a/drivers/bluetooth/bluecard_cs.c~git-net +++ a/drivers/bluetooth/bluecard_cs.c @@ -461,20 +461,20 @@ static void bluecard_receive(bluecard_in switch (info->rx_state) { case RECV_WAIT_EVENT_HEADER: - eh = (struct hci_event_hdr *)(info->rx_skb->data); + eh = hci_event_hdr(info->rx_skb); info->rx_state = RECV_WAIT_DATA; info->rx_count = eh->plen; break; case RECV_WAIT_ACL_HEADER: - ah = (struct hci_acl_hdr *)(info->rx_skb->data); + ah = hci_acl_hdr(info->rx_skb); dlen = __le16_to_cpu(ah->dlen); info->rx_state = RECV_WAIT_DATA; info->rx_count = dlen; break; case RECV_WAIT_SCO_HEADER: - sh = (struct hci_sco_hdr *)(info->rx_skb->data); + sh = hci_sco_hdr(info->rx_skb); info->rx_state = RECV_WAIT_DATA; info->rx_count = sh->dlen; break; diff -puN drivers/bluetooth/bpa10x.c~git-net drivers/bluetooth/bpa10x.c --- a/drivers/bluetooth/bpa10x.c~git-net +++ a/drivers/bluetooth/bpa10x.c @@ -231,7 +231,7 @@ static void bpa10x_wakeup(struct bpa10x_ cr = (struct usb_ctrlrequest *) urb->setup_packet; cr->wLength = __cpu_to_le16(skb->len); - memcpy(urb->transfer_buffer, skb->data, skb->len); + skb_copy_from_linear_data(skb, urb->transfer_buffer, skb->len); urb->transfer_buffer_length = skb->len; err = usb_submit_urb(urb, GFP_ATOMIC); @@ -250,7 +250,7 @@ static void bpa10x_wakeup(struct bpa10x_ skb = skb_dequeue(&data->tx_queue); if (skb) { - memcpy(urb->transfer_buffer, skb->data, skb->len); + skb_copy_from_linear_data(skb, urb->transfer_buffer, skb->len); urb->transfer_buffer_length = skb->len; err = usb_submit_urb(urb, GFP_ATOMIC); diff -puN drivers/bluetooth/bt3c_cs.c~git-net drivers/bluetooth/bt3c_cs.c --- a/drivers/bluetooth/bt3c_cs.c~git-net +++ a/drivers/bluetooth/bt3c_cs.c @@ -303,20 +303,20 @@ static void bt3c_receive(bt3c_info_t *in switch (info->rx_state) { case RECV_WAIT_EVENT_HEADER: - eh = (struct hci_event_hdr *)(info->rx_skb->data); + eh = hci_event_hdr(info->rx_skb); info->rx_state = RECV_WAIT_DATA; info->rx_count = eh->plen; break; case RECV_WAIT_ACL_HEADER: - ah = (struct hci_acl_hdr *)(info->rx_skb->data); + ah = hci_acl_hdr(info->rx_skb); dlen = __le16_to_cpu(ah->dlen); info->rx_state = RECV_WAIT_DATA; info->rx_count = dlen; break; case RECV_WAIT_SCO_HEADER: - sh = (struct hci_sco_hdr *)(info->rx_skb->data); + sh = hci_sco_hdr(info->rx_skb); info->rx_state = RECV_WAIT_DATA; info->rx_count = sh->dlen; break; diff -puN drivers/bluetooth/btuart_cs.c~git-net drivers/bluetooth/btuart_cs.c --- a/drivers/bluetooth/btuart_cs.c~git-net +++ a/drivers/bluetooth/btuart_cs.c @@ -250,20 +250,20 @@ static void btuart_receive(btuart_info_t switch (info->rx_state) { case RECV_WAIT_EVENT_HEADER: - eh = (struct hci_event_hdr *)(info->rx_skb->data); + eh 
= hci_event_hdr(info->rx_skb); info->rx_state = RECV_WAIT_DATA; info->rx_count = eh->plen; break; case RECV_WAIT_ACL_HEADER: - ah = (struct hci_acl_hdr *)(info->rx_skb->data); + ah = hci_acl_hdr(info->rx_skb); dlen = __le16_to_cpu(ah->dlen); info->rx_state = RECV_WAIT_DATA; info->rx_count = dlen; break; case RECV_WAIT_SCO_HEADER: - sh = (struct hci_sco_hdr *)(info->rx_skb->data); + sh = hci_sco_hdr(info->rx_skb); info->rx_state = RECV_WAIT_DATA; info->rx_count = sh->dlen; break; diff -puN drivers/bluetooth/dtl1_cs.c~git-net drivers/bluetooth/dtl1_cs.c --- a/drivers/bluetooth/dtl1_cs.c~git-net +++ a/drivers/bluetooth/dtl1_cs.c @@ -425,7 +425,7 @@ static int dtl1_hci_send_frame(struct sk return -ENOMEM; skb_reserve(s, NSHL); - memcpy(skb_put(s, skb->len), skb->data, skb->len); + skb_copy_from_linear_data(skb, skb_put(s, skb->len), skb->len); if (skb->len & 0x0001) *skb_put(s, 1) = 0; /* PAD */ diff -puN drivers/bluetooth/hci_h4.c~git-net drivers/bluetooth/hci_h4.c --- a/drivers/bluetooth/hci_h4.c~git-net +++ a/drivers/bluetooth/hci_h4.c @@ -188,7 +188,7 @@ static int h4_recv(struct hci_uart *hu, continue; case H4_W4_EVENT_HDR: - eh = (struct hci_event_hdr *) h4->rx_skb->data; + eh = hci_event_hdr(h4->rx_skb); BT_DBG("Event header: evt 0x%2.2x plen %d", eh->evt, eh->plen); @@ -196,7 +196,7 @@ static int h4_recv(struct hci_uart *hu, continue; case H4_W4_ACL_HDR: - ah = (struct hci_acl_hdr *) h4->rx_skb->data; + ah = hci_acl_hdr(h4->rx_skb); dlen = __le16_to_cpu(ah->dlen); BT_DBG("ACL header: dlen %d", dlen); @@ -205,7 +205,7 @@ static int h4_recv(struct hci_uart *hu, continue; case H4_W4_SCO_HDR: - sh = (struct hci_sco_hdr *) h4->rx_skb->data; + sh = hci_sco_hdr(h4->rx_skb); BT_DBG("SCO header: dlen %d", sh->dlen); diff -puN drivers/char/pcmcia/synclink_cs.c~git-net drivers/char/pcmcia/synclink_cs.c --- a/drivers/char/pcmcia/synclink_cs.c~git-net +++ a/drivers/char/pcmcia/synclink_cs.c @@ -4169,7 +4169,7 @@ static int hdlcdev_xmit(struct sk_buff * netif_stop_queue(dev); /* copy data to device buffers */ - memcpy(info->tx_buf, skb->data, skb->len); + skb_copy_from_linear_data(skb, info->tx_buf, skb->len); info->tx_get = 0; info->tx_put = info->tx_count = skb->len; diff -puN drivers/char/random.c~git-net drivers/char/random.c --- a/drivers/char/random.c~git-net +++ a/drivers/char/random.c @@ -881,15 +881,15 @@ EXPORT_SYMBOL(get_random_bytes); */ static void init_std_data(struct entropy_store *r) { - struct timeval tv; + ktime_t now; unsigned long flags; spin_lock_irqsave(&r->lock, flags); r->entropy_count = 0; spin_unlock_irqrestore(&r->lock, flags); - do_gettimeofday(&tv); - add_entropy_words(r, (__u32 *)&tv, sizeof(tv)/4); + now = ktime_get_real(); + add_entropy_words(r, (__u32 *)&now, sizeof(now)/4); add_entropy_words(r, (__u32 *)utsname(), sizeof(*(utsname()))/4); } @@ -911,14 +911,12 @@ void rand_initialize_irq(int irq) return; /* - * If kmalloc returns null, we just won't use that entropy + * If kzalloc returns null, we just won't use that entropy * source. */ - state = kmalloc(sizeof(struct timer_rand_state), GFP_KERNEL); - if (state) { - memset(state, 0, sizeof(struct timer_rand_state)); + state = kzalloc(sizeof(struct timer_rand_state), GFP_KERNEL); + if (state) irq_timer_state[irq] = state; - } } #ifdef CONFIG_BLOCK @@ -927,14 +925,12 @@ void rand_initialize_disk(struct gendisk struct timer_rand_state *state; /* - * If kmalloc returns null, we just won't use that entropy + * If kzalloc returns null, we just won't use that entropy * source. 
*/ - state = kmalloc(sizeof(struct timer_rand_state), GFP_KERNEL); - if (state) { - memset(state, 0, sizeof(struct timer_rand_state)); + state = kzalloc(sizeof(struct timer_rand_state), GFP_KERNEL); + if (state) disk->random = state; - } } #endif @@ -1469,7 +1465,6 @@ late_initcall(seqgen_init); __u32 secure_tcpv6_sequence_number(__be32 *saddr, __be32 *daddr, __be16 sport, __be16 dport) { - struct timeval tv; __u32 seq; __u32 hash[12]; struct keydata *keyptr = get_keyptr(); @@ -1485,8 +1480,7 @@ __u32 secure_tcpv6_sequence_number(__be3 seq = twothirdsMD4Transform((const __u32 *)daddr, hash) & HASH_MASK; seq += keyptr->count; - do_gettimeofday(&tv); - seq += tv.tv_usec + tv.tv_sec * 1000000; + seq += ktime_get_real().tv64; return seq; } @@ -1521,7 +1515,6 @@ __u32 secure_ip_id(__be32 daddr) __u32 secure_tcp_sequence_number(__be32 saddr, __be32 daddr, __be16 sport, __be16 dport) { - struct timeval tv; __u32 seq; __u32 hash[4]; struct keydata *keyptr = get_keyptr(); @@ -1543,12 +1536,11 @@ __u32 secure_tcp_sequence_number(__be32 * As close as possible to RFC 793, which * suggests using a 250 kHz clock. * Further reading shows this assumes 2 Mb/s networks. - * For 10 Mb/s Ethernet, a 1 MHz clock is appropriate. + * For 10 Gb/s Ethernet, a 1 GHz clock is appropriate. * That's funny, Linux has one built in! Use it! * (Networks are faster now - should this be increased?) */ - do_gettimeofday(&tv); - seq += tv.tv_usec + tv.tv_sec * 1000000; + seq += ktime_get_real().tv64; #if 0 printk("init_seq(%lx, %lx, %d, %d) = %d\n", saddr, daddr, sport, dport, seq); @@ -1556,8 +1548,6 @@ __u32 secure_tcp_sequence_number(__be32 return seq; } -EXPORT_SYMBOL(secure_tcp_sequence_number); - /* Generate secure starting point for ephemeral IPV4 transport port search */ u32 secure_ipv4_port_ephemeral(__be32 saddr, __be32 daddr, __be16 dport) { @@ -1598,7 +1588,6 @@ u32 secure_ipv6_port_ephemeral(const __b u64 secure_dccp_sequence_number(__be32 saddr, __be32 daddr, __be16 sport, __be16 dport) { - struct timeval tv; u64 seq; __u32 hash[4]; struct keydata *keyptr = get_keyptr(); @@ -1611,8 +1600,7 @@ u64 secure_dccp_sequence_number(__be32 s seq = half_md4_transform(hash, keyptr->secret); seq |= ((u64)keyptr->count) << (32 - HASH_BITS); - do_gettimeofday(&tv); - seq += tv.tv_usec + tv.tv_sec * 1000000; + seq += ktime_get_real().tv64; seq &= (1ull << 48) - 1; #if 0 printk("dccp init_seq(%lx, %lx, %d, %d) = %d\n", diff -puN drivers/connector/connector.c~git-net drivers/connector/connector.c --- a/drivers/connector/connector.c~git-net +++ a/drivers/connector/connector.c @@ -212,7 +212,7 @@ static void cn_rx_skb(struct sk_buff *__ skb = skb_get(__skb); if (skb->len >= NLMSG_SPACE(0)) { - nlh = (struct nlmsghdr *)skb->data; + nlh = nlmsg_hdr(skb); if (nlh->nlmsg_len < sizeof(struct cn_msg) || skb->len < nlh->nlmsg_len || @@ -448,7 +448,7 @@ static int __devinit cn_init(void) dev->nls = netlink_kernel_create(NETLINK_CONNECTOR, CN_NETLINK_USERS + 0xf, - dev->input, THIS_MODULE); + dev->input, NULL, THIS_MODULE); if (!dev->nls) return -EIO; diff -puN drivers/ieee1394/eth1394.c~git-net drivers/ieee1394/eth1394.c --- a/drivers/ieee1394/eth1394.c~git-net +++ a/drivers/ieee1394/eth1394.c @@ -1563,7 +1563,7 @@ static int ether1394_tx(struct sk_buff * if (memcmp(eth->h_dest, dev->broadcast, ETH1394_ALEN) == 0 || proto == htons(ETH_P_ARP) || (proto == htons(ETH_P_IP) && - IN_MULTICAST(ntohl(skb->nh.iph->daddr)))) { + IN_MULTICAST(ntohl(ip_hdr(skb)->daddr)))) { tx_type = ETH1394_GASP; dest_node = LOCAL_BUS | ALL_NODES; max_payload = 
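The drivers/char/random.c hunks above also carry two smaller recurring conversions: kmalloc() followed by memset(0) collapses into kzalloc(), and struct timeval with do_gettimeofday() becomes ktime_t with ktime_get_real(). A minimal sketch of both on a hypothetical structure:

	#include <linux/types.h>
	#include <linux/slab.h>
	#include <linux/hrtimer.h>	/* ktime_t, ktime_get_real() */

	struct example_state {
		ktime_t		created;	/* was: struct timeval */
		u32		cookie;
	};

	static struct example_state *example_state_alloc(void)
	{
		/* was: kmalloc() followed by memset(state, 0, sizeof(*state)) */
		struct example_state *state = kzalloc(sizeof(*state), GFP_KERNEL);

		if (state)
			state->created = ktime_get_real();	/* was: do_gettimeofday(&tv) */
		return state;
	}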
priv->bc_maxpayload - ETHER1394_GASP_OVERHEAD; diff -puN drivers/ieee1394/eth1394.h~git-net drivers/ieee1394/eth1394.h --- a/drivers/ieee1394/eth1394.h~git-net +++ a/drivers/ieee1394/eth1394.h @@ -83,7 +83,7 @@ struct eth1394hdr { static inline struct eth1394hdr *eth1394_hdr(const struct sk_buff *skb) { - return (struct eth1394hdr *)skb->mac.raw; + return (struct eth1394hdr *)skb_mac_header(skb); } typedef enum {ETH1394_GASP, ETH1394_WRREQ} eth1394_tx_type; diff -puN drivers/infiniband/hw/amso1100/c2.c~git-net drivers/infiniband/hw/amso1100/c2.c --- a/drivers/infiniband/hw/amso1100/c2.c~git-net +++ a/drivers/infiniband/hw/amso1100/c2.c @@ -439,7 +439,8 @@ static void c2_rx_error(struct c2_port * } /* Setup the skb for reuse since we're dropping this pkt */ - elem->skb->tail = elem->skb->data = elem->skb->head; + elem->skb->data = elem->skb->head; + skb_reset_tail_pointer(elem->skb); /* Zero out the rxp hdr in the sk_buff */ memset(elem->skb->data, 0, sizeof(*rxp_hdr)); @@ -521,9 +522,8 @@ static void c2_rx_interrupt(struct net_d * "sizeof(struct c2_rxp_hdr)". */ skb->data += sizeof(*rxp_hdr); - skb->tail = skb->data + buflen; + skb_set_tail_pointer(skb, buflen); skb->len = buflen; - skb->dev = netdev; skb->protocol = eth_type_trans(skb, netdev); netif_rx(skb); diff -puN drivers/infiniband/hw/cxgb3/iwch_cm.c~git-net drivers/infiniband/hw/cxgb3/iwch_cm.c --- a/drivers/infiniband/hw/cxgb3/iwch_cm.c~git-net +++ a/drivers/infiniband/hw/cxgb3/iwch_cm.c @@ -477,7 +477,7 @@ static void send_mpa_req(struct iwch_ep BUG_ON(skb_cloned(skb)); mpalen = sizeof(*mpa) + ep->plen; - if (skb->data + mpalen + sizeof(*req) > skb->end) { + if (skb->data + mpalen + sizeof(*req) > skb_end_pointer(skb)) { kfree_skb(skb); skb=alloc_skb(mpalen + sizeof(*req), GFP_KERNEL); if (!skb) { @@ -507,7 +507,7 @@ static void send_mpa_req(struct iwch_ep */ skb_get(skb); set_arp_failure_handler(skb, arp_failure_discard); - skb->h.raw = skb->data; + skb_reset_transport_header(skb); len = skb->len; req = (struct tx_data_wr *) skb_push(skb, sizeof(*req)); req->wr_hi = htonl(V_WR_OP(FW_WROPCODE_OFLD_TX_DATA)); @@ -559,7 +559,7 @@ static int send_mpa_reject(struct iwch_e skb_get(skb); skb->priority = CPL_PRIORITY_DATA; set_arp_failure_handler(skb, arp_failure_discard); - skb->h.raw = skb->data; + skb_reset_transport_header(skb); req = (struct tx_data_wr *) skb_push(skb, sizeof(*req)); req->wr_hi = htonl(V_WR_OP(FW_WROPCODE_OFLD_TX_DATA)); req->wr_lo = htonl(V_WR_TID(ep->hwtid)); @@ -610,7 +610,7 @@ static int send_mpa_reply(struct iwch_ep */ skb_get(skb); set_arp_failure_handler(skb, arp_failure_discard); - skb->h.raw = skb->data; + skb_reset_transport_header(skb); len = skb->len; req = (struct tx_data_wr *) skb_push(skb, sizeof(*req)); req->wr_hi = htonl(V_WR_OP(FW_WROPCODE_OFLD_TX_DATA)); @@ -821,7 +821,8 @@ static void process_mpa_reply(struct iwc /* * copy the new data into our accumulation buffer. */ - memcpy(&(ep->mpa_pkt[ep->mpa_pkt_len]), skb->data, skb->len); + skb_copy_from_linear_data(skb, &(ep->mpa_pkt[ep->mpa_pkt_len]), + skb->len); ep->mpa_pkt_len += skb->len; /* @@ -940,7 +941,8 @@ static void process_mpa_request(struct i /* * Copy the new data into our accumulation buffer. 
*/ - memcpy(&(ep->mpa_pkt[ep->mpa_pkt_len]), skb->data, skb->len); + skb_copy_from_linear_data(skb, &(ep->mpa_pkt[ep->mpa_pkt_len]), + skb->len); ep->mpa_pkt_len += skb->len; /* @@ -1619,7 +1621,8 @@ static int terminate(struct t3cdev *tdev PDBG("%s ep %p\n", __FUNCTION__, ep); skb_pull(skb, sizeof(struct cpl_rdma_terminate)); PDBG("%s saving %d bytes of term msg\n", __FUNCTION__, skb->len); - memcpy(ep->com.qp->attr.terminate_buffer, skb->data, skb->len); + skb_copy_from_linear_data(skb, ep->com.qp->attr.terminate_buffer, + skb->len); ep->com.qp->attr.terminate_msg_len = skb->len; ep->com.qp->attr.is_terminate_local = 0; return CPL_RET_BUF_DONE; diff -puN drivers/infiniband/ulp/ipoib/ipoib_cm.c~git-net drivers/infiniband/ulp/ipoib/ipoib_cm.c --- a/drivers/infiniband/ulp/ipoib/ipoib_cm.c~git-net +++ a/drivers/infiniband/ulp/ipoib/ipoib_cm.c @@ -408,7 +408,7 @@ void ipoib_cm_handle_rx_wc(struct net_de skb_put_frags(skb, IPOIB_CM_HEAD_SIZE, wc->byte_len, newskb); skb->protocol = ((struct ipoib_header *) skb->data)->proto; - skb->mac.raw = skb->data; + skb_reset_mac_header(skb); skb_pull(skb, IPOIB_ENCAP_LEN); dev->last_rx = jiffies; diff -puN drivers/infiniband/ulp/ipoib/ipoib_ib.c~git-net drivers/infiniband/ulp/ipoib/ipoib_ib.c --- a/drivers/infiniband/ulp/ipoib/ipoib_ib.c~git-net +++ a/drivers/infiniband/ulp/ipoib/ipoib_ib.c @@ -216,7 +216,7 @@ static void ipoib_ib_handle_rx_wc(struct if (wc->slid != priv->local_lid || wc->src_qp != priv->qp->qp_num) { skb->protocol = ((struct ipoib_header *) skb->data)->proto; - skb->mac.raw = skb->data; + skb_reset_mac_header(skb); skb_pull(skb, IPOIB_ENCAP_LEN); dev->last_rx = jiffies; diff -puN drivers/isdn/act2000/module.c~git-net drivers/isdn/act2000/module.c --- a/drivers/isdn/act2000/module.c~git-net +++ a/drivers/isdn/act2000/module.c @@ -442,7 +442,7 @@ act2000_sendbuf(act2000_card *card, int return 0; } skb_reserve(xmit_skb, 19); - memcpy(skb_put(xmit_skb, len), skb->data, len); + skb_copy_from_linear_data(skb, skb_put(xmit_skb, len), len); } else { xmit_skb = skb_clone(skb, GFP_ATOMIC); if (!xmit_skb) { diff -puN drivers/isdn/gigaset/usb-gigaset.c~git-net drivers/isdn/gigaset/usb-gigaset.c --- a/drivers/isdn/gigaset/usb-gigaset.c~git-net +++ a/drivers/isdn/gigaset/usb-gigaset.c @@ -652,7 +652,7 @@ static int write_modem(struct cardstate * transmit data */ count = min(bcs->tx_skb->len, (unsigned) ucs->bulk_out_size); - memcpy(ucs->bulk_out_buffer, bcs->tx_skb->data, count); + skb_copy_from_linear_data(bcs->tx_skb, ucs->bulk_out_buffer, count); skb_pull(bcs->tx_skb, count); atomic_set(&ucs->busy, 1); gig_dbg(DEBUG_OUTPUT, "write_modem: send %d bytes", count); diff -puN drivers/isdn/hardware/avm/b1dma.c~git-net drivers/isdn/hardware/avm/b1dma.c --- a/drivers/isdn/hardware/avm/b1dma.c~git-net +++ a/drivers/isdn/hardware/avm/b1dma.c @@ -404,7 +404,8 @@ static void b1dma_dispatch_tx(avmcard *c printk(KERN_DEBUG "tx: put 0x%x len=%d\n", skb->data[2], txlen); #endif - memcpy(dma->sendbuf.dmabuf, skb->data+2, skb->len-2); + skb_copy_from_linear_data_offset(skb, 2, dma->sendbuf.dmabuf, + skb->len - 2); } txlen = (txlen + 3) & ~3; diff -puN drivers/isdn/hardware/avm/c4.c~git-net drivers/isdn/hardware/avm/c4.c --- a/drivers/isdn/hardware/avm/c4.c~git-net +++ a/drivers/isdn/hardware/avm/c4.c @@ -457,7 +457,8 @@ static void c4_dispatch_tx(avmcard *card printk(KERN_DEBUG "%s: tx put 0x%x len=%d\n", card->name, skb->data[2], txlen); #endif - memcpy(dma->sendbuf.dmabuf, skb->data+2, skb->len-2); + skb_copy_from_linear_data_offset(skb, 2, dma->sendbuf.dmabuf, + 
skb->len - 2); } txlen = (txlen + 3) & ~3; diff -puN drivers/isdn/hisax/elsa_ser.c~git-net drivers/isdn/hisax/elsa_ser.c --- a/drivers/isdn/hisax/elsa_ser.c~git-net +++ a/drivers/isdn/hisax/elsa_ser.c @@ -254,14 +254,16 @@ write_modem(struct BCState *bcs) { count = len; if (count > MAX_MODEM_BUF - fp) { count = MAX_MODEM_BUF - fp; - memcpy(cs->hw.elsa.transbuf + fp, bcs->tx_skb->data, count); + skb_copy_from_linear_data(bcs->tx_skb, + cs->hw.elsa.transbuf + fp, count); skb_pull(bcs->tx_skb, count); cs->hw.elsa.transcnt += count; ret = count; count = len - count; fp = 0; } - memcpy((cs->hw.elsa.transbuf + fp), bcs->tx_skb->data, count); + skb_copy_from_linear_data(bcs->tx_skb, + cs->hw.elsa.transbuf + fp, count); skb_pull(bcs->tx_skb, count); cs->hw.elsa.transcnt += count; ret += count; diff -puN drivers/isdn/hisax/isdnl2.c~git-net drivers/isdn/hisax/isdnl2.c --- a/drivers/isdn/hisax/isdnl2.c~git-net +++ a/drivers/isdn/hisax/isdnl2.c @@ -1293,7 +1293,8 @@ l2_pull_iqueue(struct FsmInst *fi, int e oskb = skb; skb = alloc_skb(oskb->len + i, GFP_ATOMIC); memcpy(skb_put(skb, i), header, i); - memcpy(skb_put(skb, oskb->len), oskb->data, oskb->len); + skb_copy_from_linear_data(oskb, + skb_put(skb, oskb->len), oskb->len); dev_kfree_skb(oskb); } st->l2.l2l1(st, PH_PULL | INDICATION, skb); diff -puN drivers/isdn/hysdn/hycapi.c~git-net drivers/isdn/hysdn/hycapi.c --- a/drivers/isdn/hysdn/hycapi.c~git-net +++ a/drivers/isdn/hysdn/hycapi.c @@ -398,8 +398,9 @@ static u16 hycapi_send_message(struct ca _len = CAPIMSG_LEN(skb->data); if (_len > 22) { _len2 = _len - 22; - memcpy(msghead, skb->data, 22); - memcpy(skb->data + _len2, msghead, 22); + skb_copy_from_linear_data(skb, msghead, 22); + skb_copy_to_linear_data_offset(skb, _len2, + msghead, 22); skb_pull(skb, _len2); CAPIMSG_SETLEN(skb->data, 22); retval = capilib_data_b3_req(&cinfo->ncci_head, diff -puN drivers/isdn/hysdn/hysdn_net.c~git-net drivers/isdn/hysdn/hysdn_net.c --- a/drivers/isdn/hysdn/hysdn_net.c~git-net +++ a/drivers/isdn/hysdn/hysdn_net.c @@ -214,8 +214,6 @@ hysdn_rx_netpkt(hysdn_card * card, unsig lp->stats.rx_dropped++; return; } - skb->dev = &lp->netdev; - /* copy the data */ memcpy(skb_put(skb, len), buf, len); diff -puN drivers/isdn/hysdn/hysdn_sched.c~git-net drivers/isdn/hysdn/hysdn_sched.c --- a/drivers/isdn/hysdn/hysdn_sched.c~git-net +++ a/drivers/isdn/hysdn/hysdn_sched.c @@ -113,7 +113,8 @@ hysdn_sched_tx(hysdn_card *card, unsigne (skb = hysdn_tx_netget(card)) != NULL) { if (skb->len <= maxlen) { - memcpy(buf, skb->data, skb->len); /* copy the packet to the buffer */ + /* copy the packet to the buffer */ + skb_copy_from_linear_data(skb, buf, skb->len); *len = skb->len; *chan = CHAN_NDIS_DATA; card->net_tx_busy = 1; /* we are busy sending network data */ @@ -126,7 +127,7 @@ hysdn_sched_tx(hysdn_card *card, unsigne ((skb = hycapi_tx_capiget(card)) != NULL) ) { if (skb->len <= maxlen) { - memcpy(buf, skb->data, skb->len); + skb_copy_from_linear_data(skb, buf, skb->len); *len = skb->len; *chan = CHAN_CAPI; hycapi_tx_capiack(card); diff -puN drivers/isdn/i4l/isdn_common.c~git-net drivers/isdn/i4l/isdn_common.c --- a/drivers/isdn/i4l/isdn_common.c~git-net +++ a/drivers/isdn/i4l/isdn_common.c @@ -829,7 +829,7 @@ isdn_readbchan(int di, int channel, u_ch dflag = 0; } count_put = count_pull; - memcpy(cp, skb->data, count_put); + skb_copy_from_linear_data(skb, cp, count_put); cp += count_put; len -= count_put; #ifdef CONFIG_ISDN_AUDIO diff -puN drivers/isdn/i4l/isdn_net.c~git-net drivers/isdn/i4l/isdn_net.c --- 
a/drivers/isdn/i4l/isdn_net.c~git-net +++ a/drivers/isdn/i4l/isdn_net.c @@ -872,7 +872,8 @@ typedef struct { static void isdn_net_log_skb(struct sk_buff * skb, isdn_net_local * lp) { - u_char *p = skb->nh.raw; /* hopefully, this was set correctly */ + /* hopefully, this was set correctly */ + const u_char *p = skb_network_header(skb); unsigned short proto = ntohs(skb->protocol); int data_ofs; ip_ports *ipp; @@ -880,7 +881,7 @@ isdn_net_log_skb(struct sk_buff * skb, i addinfo[0] = '\0'; /* This check stolen from 2.1.72 dev_queue_xmit_nit() */ - if (skb->nh.raw < skb->data || skb->nh.raw >= skb->tail) { + if (p < skb->data || skb->network_header >= skb->tail) { /* fall back to old isdn_net_log_packet method() */ char * buf = skb->data; @@ -1121,7 +1122,7 @@ isdn_net_adjust_hdr(struct sk_buff *skb, if (!skb) return; if (lp->p_encap == ISDN_NET_ENCAP_ETHER) { - int pullsize = (ulong)skb->nh.raw - (ulong)skb->data - ETH_HLEN; + const int pullsize = skb_network_offset(skb) - ETH_HLEN; if (pullsize > 0) { printk(KERN_DEBUG "isdn_net: Pull junk %d\n", pullsize); skb_pull(skb, pullsize); @@ -1366,7 +1367,7 @@ isdn_net_type_trans(struct sk_buff *skb, struct ethhdr *eth; unsigned char *rawp; - skb->mac.raw = skb->data; + skb_reset_mac_header(skb); skb_pull(skb, ETH_HLEN); eth = eth_hdr(skb); @@ -1786,7 +1787,7 @@ isdn_net_receive(struct net_device *ndev } skb->dev = ndev; skb->pkt_type = PACKET_HOST; - skb->mac.raw = skb->data; + skb_reset_mac_header(skb); #ifdef ISDN_DEBUG_NET_DUMP isdn_dumppkt("R:", skb->data, skb->len, 40); #endif diff -puN drivers/isdn/i4l/isdn_ppp.c~git-net drivers/isdn/i4l/isdn_ppp.c --- a/drivers/isdn/i4l/isdn_ppp.c~git-net +++ a/drivers/isdn/i4l/isdn_ppp.c @@ -1100,7 +1100,8 @@ isdn_ppp_push_higher(isdn_net_dev * net_ goto drop_packet; } skb_put(skb, skb_old->len + 128); - memcpy(skb->data, skb_old->data, skb_old->len); + skb_copy_from_linear_data(skb_old, skb->data, + skb_old->len); if (net_dev->local->ppp_slot < 0) { printk(KERN_ERR "%s: net_dev->local->ppp_slot(%d) out of range\n", __FUNCTION__, net_dev->local->ppp_slot); @@ -1167,7 +1168,7 @@ isdn_ppp_push_higher(isdn_net_dev * net_ mlp->huptimer = 0; #endif /* CONFIG_IPPP_FILTER */ skb->dev = dev; - skb->mac.raw = skb->data; + skb_reset_mac_header(skb); netif_rx(skb); /* net_dev->local->stats.rx_packets++; done in isdn_net.c */ return; @@ -1902,7 +1903,9 @@ void isdn_ppp_mp_reassembly( isdn_net_de while( from != to ) { unsigned int len = from->len - MP_HEADER_LEN; - memcpy(skb_put(skb,len), from->data+MP_HEADER_LEN, len); + skb_copy_from_linear_data_offset(from, MP_HEADER_LEN, + skb_put(skb,len), + len); frag = from->next; isdn_ppp_mp_free_skb(mp, from); from = frag; diff -puN drivers/isdn/isdnloop/isdnloop.c~git-net drivers/isdn/isdnloop/isdnloop.c --- a/drivers/isdn/isdnloop/isdnloop.c~git-net +++ a/drivers/isdn/isdnloop/isdnloop.c @@ -415,7 +415,8 @@ isdnloop_sendbuf(int channel, struct sk_ spin_lock_irqsave(&card->isdnloop_lock, flags); nskb = dev_alloc_skb(skb->len); if (nskb) { - memcpy(skb_put(nskb, len), skb->data, len); + skb_copy_from_linear_data(skb, + skb_put(nskb, len), len); skb_queue_tail(&card->bqueue[channel], nskb); dev_kfree_skb(skb); } else diff -puN drivers/isdn/pcbit/capi.c~git-net drivers/isdn/pcbit/capi.c --- a/drivers/isdn/pcbit/capi.c~git-net +++ a/drivers/isdn/pcbit/capi.c @@ -429,8 +429,9 @@ int capi_decode_conn_ind(struct pcbit_ch if (!(info->data.setup.CallingPN = kmalloc(len - count + 1, GFP_ATOMIC))) return -1; - memcpy(info->data.setup.CallingPN, skb->data + count + 1, - len - count); + 
skb_copy_from_linear_data_offset(skb, count + 1, + info->data.setup.CallingPN, + len - count); info->data.setup.CallingPN[len - count] = 0; } @@ -457,8 +458,9 @@ int capi_decode_conn_ind(struct pcbit_ch if (!(info->data.setup.CalledPN = kmalloc(len - count + 1, GFP_ATOMIC))) return -1; - memcpy(info->data.setup.CalledPN, skb->data + count + 1, - len - count); + skb_copy_from_linear_data_offset(skb, count + 1, + info->data.setup.CalledPN, + len - count); info->data.setup.CalledPN[len - count] = 0; } @@ -539,7 +541,7 @@ int capi_decode_conn_actv_ind(struct pcb #ifdef DEBUG if (len > 1 && len < 31) { - memcpy(str, skb->data + 2, len - 1); + skb_copy_from_linear_data_offset(skb, 2, str, len - 1); str[len] = 0; printk(KERN_DEBUG "Connected Party Number: %s\n", str); } diff -puN drivers/media/dvb/dvb-core/dvb_net.c~git-net drivers/media/dvb/dvb-core/dvb_net.c --- a/drivers/media/dvb/dvb-core/dvb_net.c~git-net +++ a/drivers/media/dvb/dvb-core/dvb_net.c @@ -174,7 +174,7 @@ static unsigned short dvb_net_eth_type_t struct ethhdr *eth; unsigned char *rawp; - skb->mac.raw=skb->data; + skb_reset_mac_header(skb); skb_pull(skb,dev->hard_header_len); eth = eth_hdr(skb); @@ -600,6 +600,7 @@ static void dvb_net_ule( struct net_devi /* Check CRC32, we've got it in our skb already. */ unsigned short ulen = htons(priv->ule_sndu_len); unsigned short utype = htons(priv->ule_sndu_type); + const u8 *tail; struct kvec iov[3] = { { &ulen, sizeof ulen }, { &utype, sizeof utype }, @@ -613,10 +614,11 @@ static void dvb_net_ule( struct net_devi } ule_crc = iov_crc32(ule_crc, iov, 3); - expected_crc = *((u8 *)priv->ule_skb->tail - 4) << 24 | - *((u8 *)priv->ule_skb->tail - 3) << 16 | - *((u8 *)priv->ule_skb->tail - 2) << 8 | - *((u8 *)priv->ule_skb->tail - 1); + tail = skb_tail_pointer(priv->ule_skb); + expected_crc = *(tail - 4) << 24 | + *(tail - 3) << 16 | + *(tail - 2) << 8 | + *(tail - 1); if (ule_crc != expected_crc) { printk(KERN_WARNING "%lu: CRC32 check FAILED: %08x / %08x, SNDU len %d type %#x, ts_remain %d, next 2: %x.\n", priv->ts_count, ule_crc, expected_crc, priv->ule_sndu_len, priv->ule_sndu_type, ts_remain, ts_remain > 2 ? *(unsigned short *)from_where : 0); @@ -695,7 +697,9 @@ static void dvb_net_ule( struct net_devi } else { - memcpy(dest_addr, priv->ule_skb->data, ETH_ALEN); + skb_copy_from_linear_data(priv->ule_skb, + dest_addr, + ETH_ALEN); skb_pull(priv->ule_skb, ETH_ALEN); } } diff -puN drivers/message/fusion/mptlan.c~git-net drivers/message/fusion/mptlan.c --- a/drivers/message/fusion/mptlan.c~git-net +++ a/drivers/message/fusion/mptlan.c @@ -714,6 +714,7 @@ mpt_lan_sdu_send (struct sk_buff *skb, s LANSendRequest_t *pSendReq; SGETransaction32_t *pTrans; SGESimple64_t *pSimple; + const unsigned char *mac; dma_addr_t dma; unsigned long flags; int ctx; @@ -753,7 +754,7 @@ mpt_lan_sdu_send (struct sk_buff *skb, s /* Set the mac.raw pointer, since this apparently isn't getting * done before we get the skb. Pull the data pointer past the mac data. */ - skb->mac.raw = skb->data; + skb_reset_mac_header(skb); skb_pull(skb, 12); dma = pci_map_single(mpt_dev->pcidev, skb->data, skb->len, @@ -784,6 +785,7 @@ mpt_lan_sdu_send (struct sk_buff *skb, s // IOC_AND_NETDEV_NAMES_s_s(dev), // ctx, skb, skb->data)); + mac = skb_mac_header(skb); #ifdef QLOGIC_NAA_WORKAROUND { struct NAA_Hosed *nh; @@ -793,12 +795,12 @@ mpt_lan_sdu_send (struct sk_buff *skb, s drops. 
*/ read_lock_irq(&bad_naa_lock); for (nh = mpt_bad_naa; nh != NULL; nh=nh->next) { - if ((nh->ieee[0] == skb->mac.raw[0]) && - (nh->ieee[1] == skb->mac.raw[1]) && - (nh->ieee[2] == skb->mac.raw[2]) && - (nh->ieee[3] == skb->mac.raw[3]) && - (nh->ieee[4] == skb->mac.raw[4]) && - (nh->ieee[5] == skb->mac.raw[5])) { + if ((nh->ieee[0] == mac[0]) && + (nh->ieee[1] == mac[1]) && + (nh->ieee[2] == mac[2]) && + (nh->ieee[3] == mac[3]) && + (nh->ieee[4] == mac[4]) && + (nh->ieee[5] == mac[5])) { cur_naa = nh->NAA; dlprintk ((KERN_INFO "mptlan/sdu_send: using NAA value " "= %04x.\n", cur_naa)); @@ -810,12 +812,12 @@ mpt_lan_sdu_send (struct sk_buff *skb, s #endif pTrans->TransactionDetails[0] = cpu_to_le32((cur_naa << 16) | - (skb->mac.raw[0] << 8) | - (skb->mac.raw[1] << 0)); - pTrans->TransactionDetails[1] = cpu_to_le32((skb->mac.raw[2] << 24) | - (skb->mac.raw[3] << 16) | - (skb->mac.raw[4] << 8) | - (skb->mac.raw[5] << 0)); + (mac[0] << 8) | + (mac[1] << 0)); + pTrans->TransactionDetails[1] = cpu_to_le32((mac[2] << 24) | + (mac[3] << 16) | + (mac[4] << 8) | + (mac[5] << 0)); pSimple = (SGESimple64_t *) &pTrans->TransactionDetails[2]; @@ -930,7 +932,7 @@ mpt_lan_receive_post_turbo(struct net_de pci_dma_sync_single_for_cpu(mpt_dev->pcidev, priv->RcvCtl[ctx].dma, priv->RcvCtl[ctx].len, PCI_DMA_FROMDEVICE); - memcpy(skb_put(skb, len), old_skb->data, len); + skb_copy_from_linear_data(old_skb, skb_put(skb, len), len); pci_dma_sync_single_for_device(mpt_dev->pcidev, priv->RcvCtl[ctx].dma, priv->RcvCtl[ctx].len, PCI_DMA_FROMDEVICE); @@ -1091,7 +1093,7 @@ mpt_lan_receive_post_reply(struct net_de priv->RcvCtl[ctx].dma, priv->RcvCtl[ctx].len, PCI_DMA_FROMDEVICE); - memcpy(skb_put(skb, l), old_skb->data, l); + skb_copy_from_linear_data(old_skb, skb_put(skb, l), l); pci_dma_sync_single_for_device(mpt_dev->pcidev, priv->RcvCtl[ctx].dma, @@ -1120,7 +1122,7 @@ mpt_lan_receive_post_reply(struct net_de priv->RcvCtl[ctx].len, PCI_DMA_FROMDEVICE); - memcpy(skb_put(skb, len), old_skb->data, len); + skb_copy_from_linear_data(old_skb, skb_put(skb, len), len); pci_dma_sync_single_for_device(mpt_dev->pcidev, priv->RcvCtl[ctx].dma, @@ -1549,7 +1551,7 @@ mpt_lan_type_trans(struct sk_buff *skb, struct mpt_lan_ohdr *fch = (struct mpt_lan_ohdr *)skb->data; struct fcllc *fcllc; - skb->mac.raw = skb->data; + skb_reset_mac_header(skb); skb_pull(skb, sizeof(struct mpt_lan_ohdr)); if (fch->dtype == htons(0xffff)) { diff -puN drivers/net/3c501.c~git-net drivers/net/3c501.c --- a/drivers/net/3c501.c~git-net +++ a/drivers/net/3c501.c @@ -735,7 +735,6 @@ static void el_receive(struct net_device else { skb_reserve(skb,2); /* Force 16 byte alignment */ - skb->dev = dev; /* * The read increments through the bytes. 
The interrupt * handler will fix the pointer when it returns to diff -puN drivers/net/3c505.c~git-net drivers/net/3c505.c --- a/drivers/net/3c505.c~git-net +++ a/drivers/net/3c505.c @@ -615,7 +615,6 @@ static void receive_packet(struct net_de if (test_and_set_bit(0, (void *) &adapter->dmaing)) printk(KERN_ERR "%s: rx blocked, DMA in progress, dir %d\n", dev->name, adapter->current_dma.direction); - skb->dev = dev; adapter->current_dma.direction = 0; adapter->current_dma.length = rlen; adapter->current_dma.skb = skb; @@ -1026,7 +1025,7 @@ static int send_packet(struct net_device adapter->current_dma.start_time = jiffies; if ((unsigned long)(skb->data + nlen) >= MAX_DMA_ADDRESS || nlen != skb->len) { - memcpy(adapter->dma_buffer, skb->data, nlen); + skb_copy_from_linear_data(skb, adapter->dma_buffer, nlen); memset(adapter->dma_buffer+skb->len, 0, nlen-skb->len); target = isa_virt_to_bus(adapter->dma_buffer); } diff -puN drivers/net/3c507.c~git-net drivers/net/3c507.c --- a/drivers/net/3c507.c~git-net +++ a/drivers/net/3c507.c @@ -873,7 +873,6 @@ static void el16_rx(struct net_device *d } skb_reserve(skb,2); - skb->dev = dev; /* 'skb->data' points to the start of sk_buff data area. */ memcpy_fromio(skb_put(skb,pkt_len), data_frame + 10, pkt_len); diff -puN drivers/net/3c509.c~git-net drivers/net/3c509.c --- a/drivers/net/3c509.c~git-net +++ a/drivers/net/3c509.c @@ -1090,7 +1090,6 @@ el3_rx(struct net_device *dev) printk("Receiving packet size %d status %4.4x.\n", pkt_len, rx_status); if (skb != NULL) { - skb->dev = dev; skb_reserve(skb, 2); /* Align IP on 16 byte */ /* 'skb->data' points to the start of sk_buff data area. */ diff -puN drivers/net/3c515.c~git-net drivers/net/3c515.c --- a/drivers/net/3c515.c~git-net +++ a/drivers/net/3c515.c @@ -1292,7 +1292,6 @@ static int corkscrew_rx(struct net_devic printk("Receiving packet size %d status %4.4x.\n", pkt_len, rx_status); if (skb != NULL) { - skb->dev = dev; skb_reserve(skb, 2); /* Align IP on 16 byte boundaries */ /* 'skb_put()' points to the start of sk_buff data area. */ insl(ioaddr + RX_FIFO, @@ -1363,7 +1362,6 @@ static int boomerang_rx(struct net_devic copying to a properly sized skbuff. */ if (pkt_len < rx_copybreak && (skb = dev_alloc_skb(pkt_len + 4)) != 0) { - skb->dev = dev; skb_reserve(skb, 2); /* Align IP on 16 byte boundaries */ /* 'skb_put()' points to the start of sk_buff data area. 
*/ memcpy(skb_put(skb, pkt_len), diff -puN drivers/net/3c523.c~git-net drivers/net/3c523.c --- a/drivers/net/3c523.c~git-net +++ a/drivers/net/3c523.c @@ -988,7 +988,6 @@ static void elmc_rcv_int(struct net_devi rbd->status = 0; skb = (struct sk_buff *) dev_alloc_skb(totlen + 2); if (skb != NULL) { - skb->dev = dev; skb_reserve(skb, 2); /* 16 byte alignment */ skb_put(skb,totlen); eth_copy_and_sum(skb, (char *) p->base+(unsigned long) rbd->buffer,totlen,0); @@ -1146,7 +1145,7 @@ static int elmc_send_packet(struct sk_bu if (len != skb->len) memset((char *) p->xmit_cbuffs[p->xmit_count], 0, ETH_ZLEN); - memcpy((char *) p->xmit_cbuffs[p->xmit_count], (char *) (skb->data), skb->len); + skb_copy_from_linear_data(skb, (char *) p->xmit_cbuffs[p->xmit_count], skb->len); #if (NUM_XMIT_BUFFS == 1) #ifdef NO_NOPCOMMANDS diff -puN drivers/net/3c527.c~git-net drivers/net/3c527.c --- a/drivers/net/3c527.c~git-net +++ a/drivers/net/3c527.c @@ -1189,7 +1189,6 @@ static void mc32_rx_ring(struct net_devi } skb->protocol=eth_type_trans(skb,dev); - skb->dev=dev; dev->last_rx = jiffies; lp->net_stats.rx_packets++; lp->net_stats.rx_bytes += length; diff -puN drivers/net/3c59x.c~git-net drivers/net/3c59x.c --- a/drivers/net/3c59x.c~git-net +++ a/drivers/net/3c59x.c @@ -2413,7 +2413,6 @@ static int vortex_rx(struct net_device * printk(KERN_DEBUG "Receiving packet size %d status %4.4x.\n", pkt_len, rx_status); if (skb != NULL) { - skb->dev = dev; skb_reserve(skb, 2); /* Align IP on 16 byte boundaries */ /* 'skb_put()' points to the start of sk_buff data area. */ if (vp->bus_master && @@ -2490,7 +2489,6 @@ boomerang_rx(struct net_device *dev) /* Check if the packet is long enough to just accept without copying to a properly sized skbuff. */ if (pkt_len < rx_copybreak && (skb = dev_alloc_skb(pkt_len + 2)) != 0) { - skb->dev = dev; skb_reserve(skb, 2); /* Align IP on 16 byte boundaries */ pci_dma_sync_single_for_cpu(VORTEX_PCI(vp), dma, PKT_BUF_SZ, PCI_DMA_FROMDEVICE); /* 'skb_put()' points to the start of sk_buff data area. 
*/ diff -puN drivers/net/7990.c~git-net drivers/net/7990.c --- a/drivers/net/7990.c~git-net +++ a/drivers/net/7990.c @@ -331,7 +331,6 @@ static int lance_rx (struct net_device * return 0; } - skb->dev = dev; skb_reserve (skb, 2); /* 16 byte align */ skb_put (skb, len); /* make room */ eth_copy_and_sum(skb, @@ -568,7 +567,7 @@ int lance_start_xmit (struct sk_buff *sk if (skb->len < ETH_ZLEN) memset((char *)&ib->tx_buf[entry][0], 0, ETH_ZLEN); - memcpy ((char *)&ib->tx_buf [entry][0], skb->data, skblen); + skb_copy_from_linear_data(skb, &ib->tx_buf[entry][0], skblen); /* Now, give the packet to the lance */ ib->btx_ring [entry].tmd1_bits = (LE_T1_POK|LE_T1_OWN); diff -puN drivers/net/8139cp.c~git-net drivers/net/8139cp.c --- a/drivers/net/8139cp.c~git-net +++ a/drivers/net/8139cp.c @@ -573,7 +573,6 @@ rx_status_loop: } skb_reserve(new_skb, RX_OFFSET); - new_skb->dev = dev; pci_unmap_single(cp->pdev, mapping, buflen, PCI_DMA_FROMDEVICE); @@ -807,7 +806,7 @@ static int cp_start_xmit (struct sk_buff if (mss) flags |= LargeSend | ((mss & MSSMask) << MSSShift); else if (skb->ip_summed == CHECKSUM_PARTIAL) { - const struct iphdr *ip = skb->nh.iph; + const struct iphdr *ip = ip_hdr(skb); if (ip->protocol == IPPROTO_TCP) flags |= IPCS | TCPCS; else if (ip->protocol == IPPROTO_UDP) @@ -826,7 +825,7 @@ static int cp_start_xmit (struct sk_buff u32 first_len, first_eor; dma_addr_t first_mapping; int frag, first_entry = entry; - const struct iphdr *ip = skb->nh.iph; + const struct iphdr *ip = ip_hdr(skb); /* We must give this initial chunk to the device last. * Otherwise we could race with the device. @@ -1082,7 +1081,6 @@ static int cp_refill_rx (struct cp_priva if (!skb) goto err_out; - skb->dev = cp->dev; skb_reserve(skb, RX_OFFSET); mapping = pci_map_single(cp->pdev, skb->data, cp->rx_buf_sz, diff -puN drivers/net/8139too.c~git-net drivers/net/8139too.c --- a/drivers/net/8139too.c~git-net +++ a/drivers/net/8139too.c @@ -1904,10 +1904,10 @@ static __inline__ void wrap_copy(struct u32 left = RX_BUF_LEN - offset; if (size > left) { - memcpy(skb->data, ring + offset, left); - memcpy(skb->data+left, ring, size - left); + skb_copy_to_linear_data(skb, ring + offset, left); + skb_copy_to_linear_data_offset(skb, left, ring, size - left); } else - memcpy(skb->data, ring + offset, size); + skb_copy_to_linear_data(skb, ring + offset, size); } #endif @@ -2013,7 +2013,6 @@ no_early_rx: skb = dev_alloc_skb (pkt_size + 2); if (likely(skb)) { - skb->dev = dev; skb_reserve (skb, 2); /* 16 byte align the IP fields. 
*/ #if RX_BUF_IDX == 3 wrap_copy(skb, rx_ring, ring_offset+4, pkt_size); diff -puN drivers/net/82596.c~git-net drivers/net/82596.c --- a/drivers/net/82596.c~git-net +++ a/drivers/net/82596.c @@ -830,7 +830,6 @@ memory_squeeze: lp->stats.rx_dropped++; } else { - skb->dev = dev; if (!rx_in_place) { /* 16 byte align the data fields */ skb_reserve(skb, 2); diff -puN drivers/net/Makefile~git-net drivers/net/Makefile --- a/drivers/net/Makefile~git-net +++ a/drivers/net/Makefile @@ -208,7 +208,7 @@ obj-$(CONFIG_TR) += tokenring/ obj-$(CONFIG_WAN) += wan/ obj-$(CONFIG_ARCNET) += arcnet/ obj-$(CONFIG_NET_PCMCIA) += pcmcia/ -obj-$(CONFIG_NET_RADIO) += wireless/ +obj-y += wireless/ obj-$(CONFIG_NET_TULIP) += tulip/ obj-$(CONFIG_HAMRADIO) += hamradio/ obj-$(CONFIG_IRDA) += irda/ diff -puN drivers/net/a2065.c~git-net drivers/net/a2065.c --- a/drivers/net/a2065.c~git-net +++ a/drivers/net/a2065.c @@ -320,7 +320,6 @@ static int lance_rx (struct net_device * return 0; } - skb->dev = dev; skb_reserve (skb, 2); /* 16 byte align */ skb_put (skb, len); /* make room */ eth_copy_and_sum(skb, @@ -599,7 +598,7 @@ static int lance_start_xmit (struct sk_b ib->btx_ring [entry].length = (-len) | 0xf000; ib->btx_ring [entry].misc = 0; - memcpy ((char *)&ib->tx_buf [entry][0], skb->data, skblen); + skb_copy_from_linear_data(skb, &ib->tx_buf [entry][0], skblen); /* Clear the slack of the packet, do I need this? */ if (len != skblen) diff -puN drivers/net/acenic.c~git-net drivers/net/acenic.c --- a/drivers/net/acenic.c~git-net +++ a/drivers/net/acenic.c @@ -2027,7 +2027,6 @@ static void ace_rx_int(struct net_device */ csum = retdesc->tcp_udp_csum; - skb->dev = dev; skb->protocol = eth_type_trans(skb, dev); /* diff -puN drivers/net/amd8111e.c~git-net drivers/net/amd8111e.c --- a/drivers/net/amd8111e.c~git-net +++ a/drivers/net/amd8111e.c @@ -798,9 +798,7 @@ static int amd8111e_rx_poll(struct net_d pci_unmap_single(lp->pci_dev,lp->rx_dma_addr[rx_index], lp->rx_buff_len-2, PCI_DMA_FROMDEVICE); skb_put(skb, pkt_len); - skb->dev = dev; lp->rx_skbuff[rx_index] = new_skb; - new_skb->dev = dev; lp->rx_dma_addr[rx_index] = pci_map_single(lp->pci_dev, new_skb->data, lp->rx_buff_len-2, @@ -926,9 +924,7 @@ static int amd8111e_rx(struct net_device pci_unmap_single(lp->pci_dev,lp->rx_dma_addr[rx_index], lp->rx_buff_len-2, PCI_DMA_FROMDEVICE); skb_put(skb, pkt_len); - skb->dev = dev; lp->rx_skbuff[rx_index] = new_skb; - new_skb->dev = dev; lp->rx_dma_addr[rx_index] = pci_map_single(lp->pci_dev, new_skb->data, lp->rx_buff_len-2,PCI_DMA_FROMDEVICE); diff -puN drivers/net/appletalk/cops.c~git-net drivers/net/appletalk/cops.c --- a/drivers/net/appletalk/cops.c~git-net +++ a/drivers/net/appletalk/cops.c @@ -853,9 +853,9 @@ static void cops_rx(struct net_device *d return; } - skb->mac.raw = skb->data; /* Point to entire packet. */ + skb_reset_mac_header(skb); /* Point to entire packet. */ skb_pull(skb,3); - skb->h.raw = skb->data; /* Point to data (Skip header). */ + skb_reset_transport_header(skb); /* Point to data (Skip header). */ /* Update the counters. 
*/ lp->stats.rx_packets++; diff -puN drivers/net/appletalk/ltpc.c~git-net drivers/net/appletalk/ltpc.c --- a/drivers/net/appletalk/ltpc.c~git-net +++ a/drivers/net/appletalk/ltpc.c @@ -770,13 +770,13 @@ static int sendup_buffer (struct net_dev skb->data[0] = dnode; skb->data[1] = snode; skb->data[2] = llaptype; - skb->mac.raw = skb->data; /* save pointer to llap header */ + skb_reset_mac_header(skb); /* save pointer to llap header */ skb_pull(skb,3); /* copy ddp(s,e)hdr + contents */ - memcpy(skb->data,(void*)ltdmabuf,len); + skb_copy_to_linear_data(skb, ltdmabuf, len); - skb->h.raw = skb->data; + skb_reset_transport_header(skb); stats->rx_packets++; stats->rx_bytes+=skb->len; @@ -917,13 +917,14 @@ static int ltpc_xmit(struct sk_buff *skb int i; struct lt_sendlap cbuf; + unsigned char *hdr; cbuf.command = LT_SENDLAP; cbuf.dnode = skb->data[0]; cbuf.laptype = skb->data[2]; skb_pull(skb,3); /* skip past LLAP header */ cbuf.length = skb->len; /* this is host order */ - skb->h.raw=skb->data; + skb_reset_transport_header(skb); if(debug & DEBUG_UPPER) { printk("command "); @@ -932,11 +933,13 @@ static int ltpc_xmit(struct sk_buff *skb printk("\n"); } - do_write(dev,&cbuf,sizeof(cbuf),skb->h.raw,skb->len); + hdr = skb_transport_header(skb); + do_write(dev, &cbuf, sizeof(cbuf), hdr, skb->len); if(debug & DEBUG_UPPER) { printk("sent %d ddp bytes\n",skb->len); - for(i=0;i<skb->len;i++) printk("%02x ",skb->h.raw[i]); + for (i = 0; i < skb->len; i++) + printk("%02x ", hdr[i]); printk("\n"); } diff -puN drivers/net/arcnet/arc-rawmode.c~git-net drivers/net/arcnet/arc-rawmode.c --- a/drivers/net/arcnet/arc-rawmode.c~git-net +++ a/drivers/net/arcnet/arc-rawmode.c @@ -110,7 +110,7 @@ static void rx(struct net_device *dev, i pkt = (struct archdr *) skb->data; - skb->mac.raw = skb->data; + skb_reset_mac_header(skb); skb_pull(skb, ARC_HDR_SIZE); /* up to sizeof(pkt->soft) has already been copied from the card */ diff -puN drivers/net/arcnet/arcnet.c~git-net drivers/net/arcnet/arcnet.c --- a/drivers/net/arcnet/arcnet.c~git-net +++ a/drivers/net/arcnet/arcnet.c @@ -519,9 +519,12 @@ static int arcnet_header(struct sk_buff * real header when we do rebuild_header. */ *(uint16_t *) skb_push(skb, 2) = type; - if (skb->nh.raw - skb->mac.raw != 2) + /* + * XXX: Why not use skb->mac_len? + */ + if (skb->network_header - skb->mac_header != 2) BUGMSG(D_NORMAL, "arcnet_header: Yikes! diff (%d) is not 2!\n", - (int)(skb->nh.raw - skb->mac.raw)); + (int)(skb->network_header - skb->mac_header)); return -2; /* return error -- can't transmit yet! */ } else { @@ -554,11 +557,13 @@ static int arcnet_rebuild_header(struct unsigned short type; uint8_t daddr=0; struct ArcProto *proto; - - if (skb->nh.raw - skb->mac.raw != 2) { + /* + * XXX: Why not use skb->mac_len? + */ + if (skb->network_header - skb->mac_header != 2) { BUGMSG(D_NORMAL, - "rebuild_header: shouldn't be here!
(hdrsize=%d)\n", + (int)(skb->network_header - skb->mac_header)); return 0; } type = *(uint16_t *) skb_pull(skb, 2); diff -puN drivers/net/arcnet/capmode.c~git-net drivers/net/arcnet/capmode.c --- a/drivers/net/arcnet/capmode.c~git-net +++ a/drivers/net/arcnet/capmode.c @@ -122,10 +122,8 @@ static void rx(struct net_device *dev, i } skb_put(skb, length + ARC_HDR_SIZE + sizeof(int)); skb->dev = dev; - - pkt = (struct archdr *) skb->data; - - skb->mac.raw = skb->data; + skb_reset_mac_header(skb); + pkt = (struct archdr *)skb_mac_header(skb); skb_pull(skb, ARC_HDR_SIZE); /* up to sizeof(pkt->soft) has already been copied from the card */ @@ -270,13 +268,13 @@ static int ack_tx(struct net_device *dev skb_put(ackskb, length + ARC_HDR_SIZE ); ackskb->dev = dev; - ackpkt = (struct archdr *) ackskb->data; - - ackskb->mac.raw = ackskb->data; + skb_reset_mac_header(ackskb); + ackpkt = (struct archdr *)skb_mac_header(ackskb); /* skb_pull(ackskb, ARC_HDR_SIZE); */ - memcpy(ackpkt, lp->outgoing.skb->data, ARC_HDR_SIZE+sizeof(struct arc_cap)); + skb_copy_from_linear_data(lp->outgoing.skb, ackpkt, + ARC_HDR_SIZE + sizeof(struct arc_cap)); ackpkt->soft.cap.proto=0; /* using protocol 0 for acknowledge */ ackpkt->soft.cap.mes.ack=acked; diff -puN drivers/net/arcnet/rfc1051.c~git-net drivers/net/arcnet/rfc1051.c --- a/drivers/net/arcnet/rfc1051.c~git-net +++ a/drivers/net/arcnet/rfc1051.c @@ -94,7 +94,7 @@ static unsigned short type_trans(struct int hdr_size = ARC_HDR_SIZE + RFC1051_HDR_SIZE; /* Pull off the arcnet header. */ - skb->mac.raw = skb->data; + skb_reset_mac_header(skb); skb_pull(skb, hdr_size); if (pkt->hard.dest == 0) diff -puN drivers/net/arcnet/rfc1201.c~git-net drivers/net/arcnet/rfc1201.c --- a/drivers/net/arcnet/rfc1201.c~git-net +++ a/drivers/net/arcnet/rfc1201.c @@ -96,7 +96,7 @@ static unsigned short type_trans(struct int hdr_size = ARC_HDR_SIZE + RFC1201_HDR_SIZE; /* Pull off the arcnet header. 
*/ - skb->mac.raw = skb->data; + skb_reset_mac_header(skb); skb_pull(skb, hdr_size); if (pkt->hard.dest == 0) diff -puN drivers/net/ariadne.c~git-net drivers/net/ariadne.c --- a/drivers/net/ariadne.c~git-net +++ a/drivers/net/ariadne.c @@ -743,7 +743,6 @@ static int ariadne_rx(struct net_device } - skb->dev = dev; skb_reserve(skb,2); /* 16 byte align */ skb_put(skb,pkt_len); /* Make room */ eth_copy_and_sum(skb, (char *)priv->rx_buff[entry], pkt_len,0); diff -puN drivers/net/arm/am79c961a.c~git-net drivers/net/arm/am79c961a.c --- a/drivers/net/arm/am79c961a.c~git-net +++ a/drivers/net/arm/am79c961a.c @@ -526,7 +526,6 @@ am79c961_rx(struct net_device *dev, stru skb = dev_alloc_skb(len + 2); if (skb) { - skb->dev = dev; skb_reserve(skb, 2); am_readbuffer(dev, pktaddr, skb_put(skb, len), len); diff -puN drivers/net/arm/at91_ether.c~git-net drivers/net/arm/at91_ether.c --- a/drivers/net/arm/at91_ether.c~git-net +++ a/drivers/net/arm/at91_ether.c @@ -858,7 +858,6 @@ static void at91ether_rx(struct net_devi skb_reserve(skb, 2); memcpy(skb_put(skb, pktlen), p_recv, pktlen); - skb->dev = dev; skb->protocol = eth_type_trans(skb, dev); dev->last_rx = jiffies; lp->stats.rx_bytes += pktlen; diff -puN drivers/net/arm/ep93xx_eth.c~git-net drivers/net/arm/ep93xx_eth.c --- a/drivers/net/arm/ep93xx_eth.c~git-net +++ a/drivers/net/arm/ep93xx_eth.c @@ -255,7 +255,6 @@ static int ep93xx_rx(struct net_device * skb = dev_alloc_skb(length + 2); if (likely(skb != NULL)) { - skb->dev = dev; skb_reserve(skb, 2); dma_sync_single(NULL, ep->descs->rdesc[entry].buf_addr, length, DMA_FROM_DEVICE); diff -puN drivers/net/arm/ether1.c~git-net drivers/net/arm/ether1.c --- a/drivers/net/arm/ether1.c~git-net +++ a/drivers/net/arm/ether1.c @@ -875,7 +875,6 @@ ether1_recv_done (struct net_device *dev skb = dev_alloc_skb (length + 2); if (skb) { - skb->dev = dev; skb_reserve (skb, 2); ether1_readbuffer (dev, skb_put (skb, length), rbd.rbd_bufl, length); diff -puN drivers/net/arm/ether3.c~git-net drivers/net/arm/ether3.c --- a/drivers/net/arm/ether3.c~git-net +++ a/drivers/net/arm/ether3.c @@ -661,7 +661,6 @@ if (next_ptr < RX_START || next_ptr >= R if (skb) { unsigned char *buf; - skb->dev = dev; skb_reserve(skb, 2); buf = skb_put(skb, length); ether3_readbuffer(dev, buf + 12, length - 12); diff -puN drivers/net/at1700.c~git-net drivers/net/at1700.c --- a/drivers/net/at1700.c~git-net +++ a/drivers/net/at1700.c @@ -768,7 +768,6 @@ net_rx(struct net_device *dev) lp->stats.rx_dropped++; break; } - skb->dev = dev; skb_reserve(skb,2); insw(ioaddr + DATAPORT, skb_put(skb,pkt_len), (pkt_len + 1) >> 1); diff -puN drivers/net/atari_bionet.c~git-net drivers/net/atari_bionet.c --- a/drivers/net/atari_bionet.c~git-net +++ a/drivers/net/atari_bionet.c @@ -453,7 +453,8 @@ bionet_send_packet(struct sk_buff *skb, stdma_lock(bionet_intr, NULL); local_irq_restore(flags); if( !STRAM_ADDR(buf+length-1) ) { - memcpy(nic_packet->buffer, skb->data, length); + skb_copy_from_linear_data(skb, nic_packet->buffer, + length); buf = (unsigned long)&((struct nic_pkt_s *)phys_nic_packet)->buffer; } @@ -544,13 +545,13 @@ bionet_poll_rx(struct net_device *dev) { break; } - skb->dev = dev; skb_reserve( skb, 2 ); /* 16 Byte align */ skb_put( skb, pkt_len ); /* make room */ /* 'skb->data' points to the start of sk_buff data area. 
*/ - memcpy(skb->data, nic_packet->buffer, pkt_len); + skb_copy_to_linear_data(skb, nic_packet->buffer, + pkt_len); skb->protocol = eth_type_trans( skb, dev ); netif_rx(skb); dev->last_rx = jiffies; diff -puN drivers/net/atari_pamsnet.c~git-net drivers/net/atari_pamsnet.c --- a/drivers/net/atari_pamsnet.c~git-net +++ a/drivers/net/atari_pamsnet.c @@ -717,7 +717,8 @@ pamsnet_send_packet(struct sk_buff *skb, local_irq_restore(flags); if( !STRAM_ADDR(buf+length-1) ) { - memcpy(nic_packet->buffer, skb->data, length); + skb_copy_from_linear_data(skb, nic_packet->buffer, + length); buf = (unsigned long)phys_nic_packet; } @@ -792,7 +793,8 @@ pamsnet_poll_rx(struct net_device *dev) /* 'skb->data' points to the start of sk_buff data area. */ - memcpy(skb->data, nic_packet->buffer, pkt_len); + skb_copy_to_linear_data(skb, nic_packet->buffer, + pkt_len); netif_rx(skb); dev->last_rx = jiffies; lp->stats.rx_packets++; diff -puN drivers/net/atarilance.c~git-net drivers/net/atarilance.c --- a/drivers/net/atarilance.c~git-net +++ a/drivers/net/atarilance.c @@ -1047,7 +1047,6 @@ static int lance_rx( struct net_device * pkt_len ); } - skb->dev = dev; skb_reserve( skb, 2 ); /* 16 byte align */ skb_put( skb, pkt_len ); /* Make room */ lp->memcpy_f( skb->data, PKTBUF_ADDR(head), pkt_len ); diff -puN drivers/net/atl1/atl1_main.c~git-net drivers/net/atl1/atl1_main.c --- a/drivers/net/atl1/atl1_main.c~git-net +++ a/drivers/net/atl1/atl1_main.c @@ -408,7 +408,6 @@ static void atl1_rx_checksum(struct atl1 static u16 atl1_alloc_rx_buffers(struct atl1_adapter *adapter) { struct atl1_rfd_ring *rfd_ring = &adapter->rfd_ring; - struct net_device *netdev = adapter->netdev; struct pci_dev *pdev = adapter->pdev; struct page *page; unsigned long offset; @@ -444,7 +443,6 @@ static u16 atl1_alloc_rx_buffers(struct * the 14 byte MAC header is removed */ skb_reserve(skb, NET_IP_ALIGN); - skb->dev = netdev; buffer_info->alloced = 1; buffer_info->skb = skb; @@ -1296,19 +1294,21 @@ static int atl1_tso(struct atl1_adapter } if (skb->protocol == ntohs(ETH_P_IP)) { - skb->nh.iph->tot_len = 0; - skb->nh.iph->check = 0; - skb->h.th->check = - ~csum_tcpudp_magic(skb->nh.iph->saddr, - skb->nh.iph->daddr, 0, - IPPROTO_TCP, 0); - ipofst = skb->nh.raw - skb->data; + struct iphdr *iph = ip_hdr(skb); + + iph->tot_len = 0; + iph->check = 0; + tcp_hdr(skb)->check = ~csum_tcpudp_magic(iph->saddr, + iph->daddr, 0, + IPPROTO_TCP, + 0); + ipofst = skb_network_offset(skb); if (ipofst != ENET_HEADER_SIZE) /* 802.3 frame */ tso->tsopl |= 1 << TSO_PARAM_ETHTYPE_SHIFT; - tso->tsopl |= (skb->nh.iph->ihl & + tso->tsopl |= (iph->ihl & CSUM_PARAM_IPHL_MASK) << CSUM_PARAM_IPHL_SHIFT; - tso->tsopl |= ((skb->h.th->doff << 2) & + tso->tsopl |= (tcp_hdrlen(skb) & TSO_PARAM_TCPHDRLEN_MASK) << TSO_PARAM_TCPHDRLEN_SHIFT; tso->tsopl |= (skb_shinfo(skb)->gso_size & TSO_PARAM_MSS_MASK) << TSO_PARAM_MSS_SHIFT; @@ -1327,8 +1327,8 @@ static int atl1_tx_csum(struct atl1_adap u8 css, cso; if (likely(skb->ip_summed == CHECKSUM_PARTIAL)) { - cso = skb->h.raw - skb->data; - css = (skb->h.raw + skb->csum_offset) - skb->data; + cso = skb_transport_offset(skb); + css = cso + skb->csum; if (unlikely(cso & 0x1)) { printk(KERN_DEBUG "%s: payload offset != even number\n", atl1_driver_name); @@ -1370,8 +1370,7 @@ static void atl1_tx_map(struct atl1_adap if (tcp_seg) { /* TSO/GSO */ - proto_hdr_len = - ((skb->h.raw - skb->data) + (skb->h.th->doff << 2)); + proto_hdr_len = skb_transport_offset(skb) + tcp_hdrlen(skb); buffer_info->length = proto_hdr_len; page = virt_to_page(skb->data); 
offset = (unsigned long)skb->data & ~PAGE_MASK; @@ -1563,8 +1562,8 @@ static int atl1_xmit_frame(struct sk_buf mss = skb_shinfo(skb)->gso_size; if (mss) { if (skb->protocol == htons(ETH_P_IP)) { - proto_hdr_len = ((skb->h.raw - skb->data) + - (skb->h.th->doff << 2)); + proto_hdr_len = (skb_transport_offset(skb) + + tcp_hdrlen(skb)); if (unlikely(proto_hdr_len > len)) { dev_kfree_skb_any(skb); return NETDEV_TX_OK; diff -puN drivers/net/atp.c~git-net drivers/net/atp.c --- a/drivers/net/atp.c~git-net +++ a/drivers/net/atp.c @@ -793,7 +793,6 @@ static void net_rx(struct net_device *de lp->stats.rx_dropped++; goto done; } - skb->dev = dev; skb_reserve(skb, 2); /* Align IP on 16 byte boundaries */ read_block(ioaddr, pkt_len, skb_put(skb,pkt_len), dev->if_port); diff -puN drivers/net/au1000_eth.c~git-net drivers/net/au1000_eth.c --- a/drivers/net/au1000_eth.c~git-net +++ a/drivers/net/au1000_eth.c @@ -1125,7 +1125,7 @@ static int au1000_tx(struct sk_buff *skb } pDB = aup->tx_db_inuse[aup->tx_head]; - memcpy((void *)pDB->vaddr, skb->data, skb->len); + skb_copy_from_linear_data(skb, pDB->vaddr, skb->len); if (skb->len < ETH_ZLEN) { for (i=skb->len; i<ETH_ZLEN; i++) { ((char *)pDB->vaddr)[i] = 0; @@ -1205,7 +1206,6 @@ static int au1000_rx(struct net_device * aup->stats.rx_dropped++; continue; } - skb->dev = dev; skb_reserve(skb, 2); /* 16 byte IP header align */ eth_copy_and_sum(skb, (unsigned char *)pDB->vaddr, frmlen, 0); diff -puN drivers/net/b44.c~git-net drivers/net/b44.c --- a/drivers/net/b44.c~git-net +++ a/drivers/net/b44.c @@ -825,12 +825,11 @@ static int b44_rx(struct b44 *bp, int bu if (copy_skb == NULL) goto drop_it_no_recycle; - copy_skb->dev = bp->dev; skb_reserve(copy_skb, 2); skb_put(copy_skb, len); /* DMA sync done above, copy just the actual packet */ - memcpy(copy_skb->data, skb->data+bp->rx_offset, len); - + skb_copy_from_linear_data_offset(skb, bp->rx_offset, + copy_skb->data, len); skb = copy_skb; } skb->ip_summed = CHECKSUM_NONE; @@ -1007,7 +1006,8 @@ static int b44_start_xmit(struct sk_buff goto err_out; } - memcpy(skb_put(bounce_skb, len), skb->data, skb->len); + skb_copy_from_linear_data(skb, skb_put(bounce_skb, len), + skb->len); dev_kfree_skb_any(skb); skb = bounce_skb; } diff -puN drivers/net/bmac.c~git-net drivers/net/bmac.c --- a/drivers/net/bmac.c~git-net +++ a/drivers/net/bmac.c @@ -715,7 +715,6 @@ static irqreturn_t bmac_rxdma_intr(int i if (skb != NULL) { nb -= ETHERCRC; skb_put(skb, nb); - skb->dev = dev; skb->protocol = eth_type_trans(skb, dev); netif_rx(skb); dev->last_rx = jiffies; diff -puN drivers/net/bnx2.c~git-net drivers/net/bnx2.c --- a/drivers/net/bnx2.c~git-net +++ a/drivers/net/bnx2.c @@ -1884,10 +1884,8 @@ bnx2_rx_int(struct bnx2 *bp, int budget) goto reuse_rx; /* aligned copy */ - memcpy(new_skb->data, - skb->data + bp->rx_offset - 2, - len + 2); - + skb_copy_from_linear_data_offset(skb, bp->rx_offset - 2, + new_skb->data, len + 2); skb_reserve(new_skb, 2); skb_put(new_skb, len); @@ -4510,6 +4508,7 @@ bnx2_start_xmit(struct sk_buff *skb, str if ((mss = skb_shinfo(skb)->gso_size) && (skb->len > (bp->dev->mtu + ETH_HLEN))) { u32 tcp_opt_len, ip_tcp_len; + struct iphdr *iph; if (skb_header_cloned(skb) && pskb_expand_head(skb, 0, 0, GFP_ATOMIC)) { @@ -4517,25 +4516,23 @@ bnx2_start_xmit(struct sk_buff *skb, str return NETDEV_TX_OK; } - tcp_opt_len = ((skb->h.th->doff - 5) * 4); vlan_tag_flags |= TX_BD_FLAGS_SW_LSO; tcp_opt_len = 0; - if (skb->h.th->doff > 5) { - tcp_opt_len = (skb->h.th->doff - 5) << 2; - } - ip_tcp_len = (skb->nh.iph->ihl << 2) + sizeof(struct tcphdr); + if
(tcp_hdr(skb)->doff > 5) + tcp_opt_len = tcp_optlen(skb); + + ip_tcp_len = ip_hdrlen(skb) + sizeof(struct tcphdr); - skb->nh.iph->check = 0; - skb->nh.iph->tot_len = htons(mss + ip_tcp_len + tcp_opt_len); - skb->h.th->check = - ~csum_tcpudp_magic(skb->nh.iph->saddr, - skb->nh.iph->daddr, - 0, IPPROTO_TCP, 0); - - if (tcp_opt_len || (skb->nh.iph->ihl > 5)) { - vlan_tag_flags |= ((skb->nh.iph->ihl - 5) + - (tcp_opt_len >> 2)) << 8; + iph = ip_hdr(skb); + iph->check = 0; + iph->tot_len = htons(mss + ip_tcp_len + tcp_opt_len); + tcp_hdr(skb)->check = ~csum_tcpudp_magic(iph->saddr, + iph->daddr, 0, + IPPROTO_TCP, 0); + if (tcp_opt_len || (iph->ihl > 5)) { + vlan_tag_flags |= ((iph->ihl - 5) + + (tcp_opt_len >> 2)) << 8; } } else diff -puN drivers/net/bonding/bond_3ad.c~git-net drivers/net/bonding/bond_3ad.c --- a/drivers/net/bonding/bond_3ad.c~git-net +++ a/drivers/net/bonding/bond_3ad.c @@ -884,8 +884,8 @@ static int ad_lacpdu_send(struct port *p } skb->dev = slave->dev; - skb->mac.raw = skb->data; - skb->nh.raw = skb->data + ETH_HLEN; + skb_reset_mac_header(skb); + skb->network_header = skb->mac_header + ETH_HLEN; skb->protocol = PKT_TYPE_LACPDU; skb->priority = TC_PRIO_CONTROL; @@ -928,8 +928,8 @@ static int ad_marker_send(struct port *p skb_reserve(skb, 16); skb->dev = slave->dev; - skb->mac.raw = skb->data; - skb->nh.raw = skb->data + ETH_HLEN; + skb_reset_mac_header(skb); + skb->network_header = skb->mac_header + ETH_HLEN; skb->protocol = PKT_TYPE_LACPDU; marker_header = (struct marker_header *)skb_put(skb, length); diff -puN drivers/net/bonding/bond_alb.c~git-net drivers/net/bonding/bond_alb.c --- a/drivers/net/bonding/bond_alb.c~git-net +++ a/drivers/net/bonding/bond_alb.c @@ -104,10 +104,15 @@ struct arp_pkt { }; #pragma pack() +static inline struct arp_pkt *arp_pkt(const struct sk_buff *skb) +{ + return (struct arp_pkt *)skb_network_header(skb); +} + /* Forward declaration */ static void alb_send_learning_packets(struct slave *slave, u8 mac_addr[]); -static inline u8 _simple_hash(u8 *hash_start, int hash_size) +static inline u8 _simple_hash(const u8 *hash_start, int hash_size) { int i; u8 hash = 0; @@ -613,7 +618,7 @@ static void rlb_req_update_subnet_client static struct slave *rlb_choose_channel(struct sk_buff *skb, struct bonding *bond) { struct alb_bond_info *bond_info = &(BOND_ALB_INFO(bond)); - struct arp_pkt *arp = (struct arp_pkt *)skb->nh.raw; + struct arp_pkt *arp = arp_pkt(skb); struct slave *assigned_slave; struct rlb_client_info *client_info; u32 hash_index = 0; @@ -701,7 +706,7 @@ static struct slave *rlb_choose_channel( */ static struct slave *rlb_arp_xmit(struct sk_buff *skb, struct bonding *bond) { - struct arp_pkt *arp = (struct arp_pkt *)skb->nh.raw; + struct arp_pkt *arp = arp_pkt(skb); struct slave *tx_slave = NULL; if (arp->op_code == __constant_htons(ARPOP_REPLY)) { @@ -890,8 +895,8 @@ static void alb_send_learning_packets(st data = skb_put(skb, size); memcpy(data, &pkt, size); - skb->mac.raw = data; - skb->nh.raw = data + ETH_HLEN; + skb_reset_mac_header(skb); + skb->network_header = skb->mac_header + ETH_HLEN; skb->protocol = pkt.type; skb->priority = TC_PRIO_CONTROL; skb->dev = slave->dev; @@ -1263,10 +1268,10 @@ int bond_alb_xmit(struct sk_buff *skb, s int hash_size = 0; int do_tx_balance = 1; u32 hash_index = 0; - u8 *hash_start = NULL; + const u8 *hash_start = NULL; int res = 1; - skb->mac.raw = (unsigned char *)skb->data; + skb_reset_mac_header(skb); eth_data = eth_hdr(skb); /* make sure that the curr_active_slave and the slaves list do @@ -1280,15 
+1285,18 @@ int bond_alb_xmit(struct sk_buff *skb, s } switch (ntohs(skb->protocol)) { - case ETH_P_IP: + case ETH_P_IP: { + const struct iphdr *iph = ip_hdr(skb); + if ((memcmp(eth_data->h_dest, mac_bcast, ETH_ALEN) == 0) || - (skb->nh.iph->daddr == ip_bcast) || - (skb->nh.iph->protocol == IPPROTO_IGMP)) { + (iph->daddr == ip_bcast) || + (iph->protocol == IPPROTO_IGMP)) { do_tx_balance = 0; break; } - hash_start = (char*)&(skb->nh.iph->daddr); - hash_size = sizeof(skb->nh.iph->daddr); + hash_start = (char *)&(iph->daddr); + hash_size = sizeof(iph->daddr); + } break; case ETH_P_IPV6: if (memcmp(eth_data->h_dest, mac_bcast, ETH_ALEN) == 0) { @@ -1296,8 +1304,8 @@ int bond_alb_xmit(struct sk_buff *skb, s break; } - hash_start = (char*)&(skb->nh.ipv6h->daddr); - hash_size = sizeof(skb->nh.ipv6h->daddr); + hash_start = (char *)&(ipv6_hdr(skb)->daddr); + hash_size = sizeof(ipv6_hdr(skb)->daddr); break; case ETH_P_IPX: if (ipx_hdr(skb)->ipx_checksum != diff -puN drivers/net/bonding/bond_main.c~git-net drivers/net/bonding/bond_main.c --- a/drivers/net/bonding/bond_main.c~git-net +++ a/drivers/net/bonding/bond_main.c @@ -2524,7 +2524,7 @@ static int bond_arp_rcv(struct sk_buff * (2 * sizeof(u32))))) goto out_unlock; - arp = skb->nh.arph; + arp = arp_hdr(skb); if (arp->ar_hln != dev->addr_len || skb->pkt_type == PACKET_OTHERHOST || skb->pkt_type == PACKET_LOOPBACK || @@ -3476,7 +3476,7 @@ static int bond_xmit_hash_policy_l34(str struct net_device *bond_dev, int count) { struct ethhdr *data = (struct ethhdr *)skb->data; - struct iphdr *iph = skb->nh.iph; + struct iphdr *iph = ip_hdr(skb); u16 *layer4hdr = (u16 *)((u32 *)iph + iph->ihl); int layer4_xor = 0; @@ -3640,9 +3640,8 @@ static struct net_device_stats *bond_get read_lock_bh(&bond->lock); bond_for_each_slave(bond, slave, i) { - if (slave->dev->get_stats) { - sstats = slave->dev->get_stats(slave->dev); - + sstats = slave->dev->get_stats(slave->dev); + if (sstats) { stats->rx_packets += sstats->rx_packets; stats->rx_bytes += sstats->rx_bytes; stats->rx_errors += sstats->rx_errors; diff -puN drivers/net/cassini.c~git-net drivers/net/cassini.c --- a/drivers/net/cassini.c~git-net +++ a/drivers/net/cassini.c @@ -1995,7 +1995,6 @@ static int cas_rx_process_pkt(struct cas return -1; *skbref = skb; - skb->dev = cp->dev; skb_reserve(skb, swivel); p = skb->data; @@ -2822,10 +2821,8 @@ static inline int cas_xmit_tx_ringN(stru ctrl = 0; if (skb->ip_summed == CHECKSUM_PARTIAL) { - u64 csum_start_off, csum_stuff_off; - - csum_start_off = (u64) (skb->h.raw - skb->data); - csum_stuff_off = csum_start_off + skb->csum_offset; + const u64 csum_start_off = skb_transport_offset(skb); + const u64 csum_stuff_off = csum_start_off + skb->csum_offset; ctrl = TX_DESC_CSUM_EN | CAS_BASE(TX_DESC_CSUM_START, csum_start_off) | @@ -2849,8 +2846,8 @@ static inline int cas_xmit_tx_ringN(stru ctrl | TX_DESC_SOF, 0); entry = TX_DESC_NEXT(ring, entry); - memcpy(tx_tiny_buf(cp, ring, entry), skb->data + - len - tabort, tabort); + skb_copy_from_linear_data_offset(skb, len - tabort, + tx_tiny_buf(cp, ring, entry), tabort); mapping = tx_tiny_map(cp, ring, entry, tentry); cas_write_txd(cp, ring, entry, mapping, tabort, ctrl, (nr_frags == 0)); diff -puN drivers/net/chelsio/sge.c~git-net drivers/net/chelsio/sge.c --- a/drivers/net/chelsio/sge.c~git-net +++ a/drivers/net/chelsio/sge.c @@ -1062,7 +1062,7 @@ static inline struct sk_buff *get_packet pci_unmap_addr(ce, dma_addr), pci_unmap_len(ce, dma_len), PCI_DMA_FROMDEVICE); - memcpy(skb->data, ce->skb->data, len); + 
skb_copy_from_linear_data(ce->skb, skb->data, len); pci_dma_sync_single_for_device(pdev, pci_unmap_addr(ce, dma_addr), pci_unmap_len(ce, dma_len), @@ -1379,12 +1379,11 @@ static void sge_rx(struct sge *sge, stru } __skb_pull(skb, sizeof(*p)); - skb->dev = adapter->port[p->iff].dev; skb->dev->last_rx = jiffies; st = per_cpu_ptr(sge->port_stats[p->iff], smp_processor_id()); st->rx_packets++; - skb->protocol = eth_type_trans(skb, skb->dev); + skb->protocol = eth_type_trans(skb, adapter->port[p->iff].dev); if ((adapter->flags & RX_CSUM_ENABLED) && p->csum == 0xffff && skb->protocol == htons(ETH_P_IP) && (skb->data[9] == IPPROTO_TCP || skb->data[9] == IPPROTO_UDP)) { @@ -1866,14 +1865,14 @@ int t1_start_xmit(struct sk_buff *skb, s ++st->tx_tso; - eth_type = skb->nh.raw - skb->data == ETH_HLEN ? + eth_type = skb_network_offset(skb) == ETH_HLEN ? CPL_ETH_II : CPL_ETH_II_VLAN; hdr = (struct cpl_tx_pkt_lso *)skb_push(skb, sizeof(*hdr)); hdr->opcode = CPL_TX_PKT_LSO; hdr->ip_csum_dis = hdr->l4_csum_dis = 0; - hdr->ip_hdr_words = skb->nh.iph->ihl; - hdr->tcp_hdr_words = skb->h.th->doff; + hdr->ip_hdr_words = ip_hdr(skb)->ihl; + hdr->tcp_hdr_words = tcp_hdr(skb)->doff; hdr->eth_type_mss = htons(MK_ETH_TYPE_MSS(eth_type, skb_shinfo(skb)->gso_size)); hdr->len = htonl(skb->len - sizeof(*hdr)); @@ -1913,7 +1912,7 @@ int t1_start_xmit(struct sk_buff *skb, s if (!(adapter->flags & UDP_CSUM_CAPABLE) && skb->ip_summed == CHECKSUM_PARTIAL && - skb->nh.iph->protocol == IPPROTO_UDP) { + ip_hdr(skb)->protocol == IPPROTO_UDP) { if (unlikely(skb_checksum_help(skb))) { pr_debug("%s: unable to do udp checksum\n", dev->name); dev_kfree_skb_any(skb); @@ -1926,7 +1925,7 @@ int t1_start_xmit(struct sk_buff *skb, s */ if ((unlikely(!adapter->sge->espibug_skb[dev->if_port]))) { if (skb->protocol == htons(ETH_P_ARP) && - skb->nh.arph->ar_op == htons(ARPOP_REQUEST)) { + arp_hdr(skb)->ar_op == htons(ARPOP_REQUEST)) { adapter->sge->espibug_skb[dev->if_port] = skb; /* We want to re-use this skb later. 
We * simply bump the reference count and it @@ -2096,10 +2095,14 @@ static void espibug_workaround_t204(unsi 0x0, 0x7, 0x43, 0x0, 0x0, 0x0 }; - memcpy(skb->data + sizeof(struct cpl_tx_pkt), - ch_mac_addr, ETH_ALEN); - memcpy(skb->data + skb->len - 10, - ch_mac_addr, ETH_ALEN); + skb_copy_to_linear_data_offset(skb, + sizeof(struct cpl_tx_pkt), + ch_mac_addr, + ETH_ALEN); + skb_copy_to_linear_data_offset(skb, + skb->len - 10, + ch_mac_addr, + ETH_ALEN); skb->cb[0] = 0xff; } @@ -2126,10 +2129,14 @@ static void espibug_workaround(unsigned if (!skb->cb[0]) { u8 ch_mac_addr[ETH_ALEN] = {0x0, 0x7, 0x43, 0x0, 0x0, 0x0}; - memcpy(skb->data + sizeof(struct cpl_tx_pkt), - ch_mac_addr, ETH_ALEN); - memcpy(skb->data + skb->len - 10, ch_mac_addr, - ETH_ALEN); + skb_copy_to_linear_data_offset(skb, + sizeof(struct cpl_tx_pkt), + ch_mac_addr, + ETH_ALEN); + skb_copy_to_linear_data_offset(skb, + skb->len - 10, + ch_mac_addr, + ETH_ALEN); skb->cb[0] = 0xff; } diff -puN drivers/net/cris/eth_v10.c~git-net drivers/net/cris/eth_v10.c --- a/drivers/net/cris/eth_v10.c~git-net +++ a/drivers/net/cris/eth_v10.c @@ -1348,7 +1348,8 @@ e100_rx(struct net_device *dev) #ifdef ETHDEBUG printk("head = 0x%x, data = 0x%x, tail = 0x%x, end = 0x%x\n", - skb->head, skb->data, skb->tail, skb->end); + skb->head, skb->data, skb_tail_pointer(skb), + skb_end_pointer(skb)); printk("copying packet to 0x%x.\n", skb_data_ptr); #endif @@ -1375,7 +1376,6 @@ e100_rx(struct net_device *dev) myNextRxDesc->descr.buf = L1_CACHE_ALIGN(virt_to_phys(myNextRxDesc->skb->data)); } - skb->dev = dev; skb->protocol = eth_type_trans(skb, dev); /* Send the packet to the upper layers */ diff -puN drivers/net/cs89x0.c~git-net drivers/net/cs89x0.c --- a/drivers/net/cs89x0.c~git-net +++ a/drivers/net/cs89x0.c @@ -1004,7 +1004,6 @@ skip_this_frame: return; } skb_reserve(skb, 2); /* longword align L3 header */ - skb->dev = dev; if (bp + length > lp->end_dma_buff) { int semi_cnt = lp->end_dma_buff - bp; @@ -1702,7 +1701,6 @@ net_rx(struct net_device *dev) return; } skb_reserve(skb, 2); /* longword align L3 header */ - skb->dev = dev; readwords(ioaddr, RX_FRAME_PORT, skb_put(skb, length), length >> 1); if (length & 1) diff -puN drivers/net/cxgb3/cxgb3_offload.c~git-net drivers/net/cxgb3/cxgb3_offload.c --- a/drivers/net/cxgb3/cxgb3_offload.c~git-net +++ a/drivers/net/cxgb3/cxgb3_offload.c @@ -783,7 +783,7 @@ static int do_trace(struct t3cdev *dev, skb->protocol = htons(0xffff); skb->dev = dev->lldev; skb_pull(skb, sizeof(*p)); - skb->mac.raw = skb->data; + skb_reset_mac_header(skb); netif_receive_skb(skb); return 0; } diff -puN drivers/net/cxgb3/sge.c~git-net drivers/net/cxgb3/sge.c --- a/drivers/net/cxgb3/sge.c~git-net +++ a/drivers/net/cxgb3/sge.c @@ -661,7 +661,7 @@ static inline struct sk_buff *get_imm_pa if (skb) { __skb_put(skb, IMMED_PKT_SIZE); - memcpy(skb->data, resp->imm_data, IMMED_PKT_SIZE); + skb_copy_to_linear_data(skb, resp->imm_data, IMMED_PKT_SIZE); } return skb; } @@ -897,11 +897,11 @@ static void write_tx_pkt_wr(struct adapt d->flit[2] = 0; cntrl |= V_TXPKT_OPCODE(CPL_TX_PKT_LSO); hdr->cntrl = htonl(cntrl); - eth_type = skb->nh.raw - skb->data == ETH_HLEN ? + eth_type = skb_network_offset(skb) == ETH_HLEN ? 
CPL_ETH_II : CPL_ETH_II_VLAN; tso_info |= V_LSO_ETH_TYPE(eth_type) | - V_LSO_IPHDR_WORDS(skb->nh.iph->ihl) | - V_LSO_TCPHDR_WORDS(skb->h.th->doff); + V_LSO_IPHDR_WORDS(ip_hdr(skb)->ihl) | + V_LSO_TCPHDR_WORDS(tcp_hdr(skb)->doff); hdr->lso_info = htonl(tso_info); flits = 3; } else { @@ -913,7 +913,8 @@ static void write_tx_pkt_wr(struct adapt if (skb->len <= WR_LEN - sizeof(*cpl)) { q->sdesc[pidx].skb = NULL; if (!skb->data_len) - memcpy(&d->flit[2], skb->data, skb->len); + skb_copy_from_linear_data(skb, &d->flit[2], + skb->len); else skb_copy_bits(skb, 0, &d->flit[2], skb->len); @@ -1319,16 +1320,19 @@ static void write_ofld_wr(struct adapter /* Only TX_DATA builds SGLs */ from = (struct work_request_hdr *)skb->data; - memcpy(&d->flit[1], &from[1], skb->h.raw - skb->data - sizeof(*from)); + memcpy(&d->flit[1], &from[1], + skb_transport_offset(skb) - sizeof(*from)); - flits = (skb->h.raw - skb->data) / 8; + flits = skb_transport_offset(skb) / 8; sgp = ndesc == 1 ? (struct sg_ent *)&d->flit[flits] : sgl; - sgl_flits = make_sgl(skb, sgp, skb->h.raw, skb->tail - skb->h.raw, + sgl_flits = make_sgl(skb, sgp, skb_transport_header(skb), + skb->tail - skb->transport_header, adap->pdev); if (need_skb_unmap()) { setup_deferred_unmapping(skb, adap->pdev, sgp, sgl_flits); skb->destructor = deferred_unmap_destructor; - ((struct unmap_info *)skb->cb)->len = skb->tail - skb->h.raw; + ((struct unmap_info *)skb->cb)->len = (skb->tail - + skb->transport_header); } write_wr_hdr_sgl(ndesc, skb, d, pidx, q, sgl, flits, sgl_flits, @@ -1349,8 +1353,8 @@ static inline unsigned int calc_tx_descs if (skb->len <= WR_LEN && cnt == 0) return 1; /* packet fits as immediate data */ - flits = (skb->h.raw - skb->data) / 8; /* headers */ - if (skb->tail != skb->h.raw) + flits = skb_transport_offset(skb) / 8; /* headers */ + if (skb->tail != skb->transport_header) cnt++; return flits_to_desc(flits + sgl_len(cnt)); } @@ -1620,7 +1624,9 @@ static inline int rx_offload(struct t3cd unsigned int gather_idx) { rq->offload_pkts++; - skb->mac.raw = skb->nh.raw = skb->h.raw = skb->data; + skb_reset_mac_header(skb); + skb_reset_network_header(skb); + skb_reset_transport_header(skb); if (rq->polling) { rx_gather[gather_idx++] = skb; @@ -1684,9 +1690,8 @@ static void rx_eth(struct adapter *adap, struct port_info *pi; skb_pull(skb, sizeof(*p) + pad); - skb->dev = adap->port[p->iff]; skb->dev->last_rx = jiffies; - skb->protocol = eth_type_trans(skb, skb->dev); + skb->protocol = eth_type_trans(skb, adap->port[p->iff]); pi = netdev_priv(skb->dev); if (pi->rx_csum_offload && p->csum_valid && p->csum == 0xffff && !p->fragment) { @@ -1717,11 +1722,11 @@ static void skb_data_init(struct sk_buff { skb->len = len; if (len <= SKB_DATA_SIZE) { - memcpy(skb->data, p->va, len); + skb_copy_to_linear_data(skb, p->va, len); skb->tail += len; put_page(p->frag.page); } else { - memcpy(skb->data, p->va, SKB_DATA_SIZE); + skb_copy_to_linear_data(skb, p->va, SKB_DATA_SIZE); skb_shinfo(skb)->frags[0].page = p->frag.page; skb_shinfo(skb)->frags[0].page_offset = p->frag.page_offset + SKB_DATA_SIZE; @@ -1767,7 +1772,7 @@ static struct sk_buff *get_packet(struct __skb_put(skb, len); pci_dma_sync_single_for_cpu(adap->pdev, mapping, len, PCI_DMA_FROMDEVICE); - memcpy(skb->data, sd->t.skb->data, len); + skb_copy_from_linear_data(sd->t.skb, skb->data, len); pci_dma_sync_single_for_device(adap->pdev, mapping, len, PCI_DMA_FROMDEVICE); } else if (!drop_thres) diff -puN drivers/net/de600.c~git-net drivers/net/de600.c --- a/drivers/net/de600.c~git-net +++ 
a/drivers/net/de600.c @@ -359,7 +359,6 @@ static void de600_rx_intr(struct net_dev } /* else */ - skb->dev = dev; skb_reserve(skb,2); /* Align */ /* 'skb->data' points to the start of sk_buff data area. */ diff -puN drivers/net/de620.c~git-net drivers/net/de620.c --- a/drivers/net/de620.c~git-net +++ a/drivers/net/de620.c @@ -697,7 +697,6 @@ static int de620_rx_intr(struct net_devi } else { /* Yep! Go get it! */ skb_reserve(skb,2); /* Align */ - skb->dev = dev; /* skb->data points to the start of sk_buff data area */ buffer = skb_put(skb,size); /* copy the packet into the buffer */ diff -puN drivers/net/declance.c~git-net drivers/net/declance.c --- a/drivers/net/declance.c~git-net +++ a/drivers/net/declance.c @@ -616,7 +616,6 @@ static int lance_rx(struct net_device *d } lp->stats.rx_bytes += len; - skb->dev = dev; skb_reserve(skb, 2); /* 16 byte align */ skb_put(skb, len); /* make room */ diff -puN drivers/net/defxx.c~git-net drivers/net/defxx.c --- a/drivers/net/defxx.c~git-net +++ a/drivers/net/defxx.c @@ -3091,13 +3091,13 @@ static void dfx_rcv_queue_process( { /* Receive buffer allocated, pass receive packet up */ - memcpy(skb->data, p_buff + RCV_BUFF_K_PADDING, pkt_len+3); + skb_copy_to_linear_data(skb, + p_buff + RCV_BUFF_K_PADDING, + pkt_len + 3); } skb_reserve(skb,3); /* adjust data field so that it points to FC byte */ skb_put(skb, pkt_len); /* pass up packet length, NOT including CRC */ - skb->dev = bp->dev; /* pass up device pointer */ - skb->protocol = fddi_type_trans(skb, bp->dev); bp->rcv_total_bytes += skb->len; netif_rx(skb); diff -puN drivers/net/depca.c~git-net drivers/net/depca.c --- a/drivers/net/depca.c~git-net +++ a/drivers/net/depca.c @@ -1044,7 +1044,6 @@ static int depca_rx(struct net_device *d unsigned char *buf; skb_reserve(skb, 2); /* 16 byte align the IP header */ buf = skb_put(skb, pkt_len); - skb->dev = dev; if (entry < lp->rx_old) { /* Wrapped buffer */ len = (lp->rxRingMask - lp->rx_old + 1) * RX_BUFF_SZ; memcpy_fromio(buf, lp->rx_buff[lp->rx_old], len); diff -puN drivers/net/dgrs.c~git-net drivers/net/dgrs.c --- a/drivers/net/dgrs.c~git-net +++ a/drivers/net/dgrs.c @@ -503,7 +503,6 @@ dgrs_rcv_frame( /* discarding the frame */ goto out; } - skb->dev = devN; skb_reserve(skb, 2); /* Align IP header */ again: @@ -742,7 +741,7 @@ static int dgrs_start_xmit(struct sk_buf } amt = min_t(unsigned int, len, rbdp->size - count); - memcpy( (char *) S2H(rbdp->buf) + count, skb->data + i, amt); + skb_copy_from_linear_data_offset(skb, i, S2H(rbdp->buf) + count, amt); i += amt; count += amt; len -= amt; diff -puN drivers/net/dl2k.c~git-net drivers/net/dl2k.c --- a/drivers/net/dl2k.c~git-net +++ a/drivers/net/dl2k.c @@ -504,7 +504,6 @@ rio_timer (unsigned long data) break; } np->rx_skbuff[entry] = skb; - skb->dev = dev; /* 16 byte align the IP header */ skb_reserve (skb, 2); np->rx_ring[entry].fraginfo = @@ -575,7 +574,6 @@ alloc_list (struct net_device *dev) dev->name); break; } - skb->dev = dev; /* Mark as being used by this device. */ skb_reserve (skb, 2); /* 16 byte align the IP header. */ /* Rubicon now supports 40 bits of addressing space. 
*/ np->rx_ring[i].fraginfo = @@ -866,7 +864,6 @@ receive_packet (struct net_device *dev) DMA_48BIT_MASK, np->rx_buf_sz, PCI_DMA_FROMDEVICE); - skb->dev = dev; /* 16 byte align the IP header */ skb_reserve (skb, 2); eth_copy_and_sum (skb, @@ -910,7 +907,6 @@ receive_packet (struct net_device *dev) break; } np->rx_skbuff[entry] = skb; - skb->dev = dev; /* 16 byte align the IP header */ skb_reserve (skb, 2); np->rx_ring[entry].fraginfo = diff -puN drivers/net/dm9000.c~git-net drivers/net/dm9000.c --- a/drivers/net/dm9000.c~git-net +++ a/drivers/net/dm9000.c @@ -954,7 +954,6 @@ dm9000_rx(struct net_device *dev) /* Move data from DM9000 */ if (GoodPacket && ((skb = dev_alloc_skb(RxLen + 4)) != NULL)) { - skb->dev = dev; skb_reserve(skb, 2); rdptr = (u8 *) skb_put(skb, RxLen - 4); diff -puN drivers/net/e100.c~git-net drivers/net/e100.c --- a/drivers/net/e100.c~git-net +++ a/drivers/net/e100.c @@ -1754,7 +1754,7 @@ static int e100_rx_alloc_skb(struct nic /* Align, init, and map the RFD. */ skb_reserve(rx->skb, NET_IP_ALIGN); - memcpy(rx->skb->data, &nic->blank_rfd, sizeof(struct rfd)); + skb_copy_to_linear_data(rx->skb, &nic->blank_rfd, sizeof(struct rfd)); rx->dma_addr = pci_map_single(nic->pdev, rx->skb->data, RFD_BUF_LEN, PCI_DMA_BIDIRECTIONAL); diff -puN drivers/net/e1000/e1000_main.c~git-net drivers/net/e1000/e1000_main.c --- a/drivers/net/e1000/e1000_main.c~git-net +++ a/drivers/net/e1000/e1000_main.c @@ -3035,7 +3035,7 @@ e1000_tx_csum(struct e1000_adapter *adap u8 css; if (likely(skb->ip_summed == CHECKSUM_PARTIAL)) { - css = skb->h.raw - skb->data; + css = skb_transport_offset(skb); i = tx_ring->next_to_use; buffer_info = &tx_ring->buffer_info[i]; @@ -3383,7 +3383,7 @@ e1000_xmit_frame(struct sk_buff *skb, st /* TSO Workaround for 82571/2/3 Controllers -- if skb->data * points to just header, pull a few bytes of payload from * frags into skb->data */ - hdr_len = ((skb->h.raw - skb->data) + (skb->h.th->doff << 2)); + hdr_len = skb_transport_offset(skb) + tcp_hdrlen(skb); if (skb->data_len && (hdr_len == (skb->len - skb->data_len))) { switch (adapter->hw.mac.type) { unsigned int pull_size; @@ -3394,7 +3394,7 @@ e1000_xmit_frame(struct sk_buff *skb, st * NOTE: this is a TSO only workaround * if end byte alignment not correct move us * into the next dword */ - if ((unsigned long)(skb->tail - 1) & 4) + if ((unsigned long)(skb_tail_pointer(skb) - 1) & 4) break; /* fall through */ case e1000_82571: @@ -4344,9 +4344,12 @@ e1000_clean_rx_irq(struct e1000_adapter netdev_alloc_skb(netdev, length + NET_IP_ALIGN); if (new_skb) { skb_reserve(new_skb, NET_IP_ALIGN); - memcpy(new_skb->data - NET_IP_ALIGN, - skb->data - NET_IP_ALIGN, - length + NET_IP_ALIGN); + skb_copy_to_linear_data_offset(new_skb, + -NET_IP_ALIGN, + (skb->data - + NET_IP_ALIGN), + (length + + NET_IP_ALIGN)); /* save the skb in buffer_info as good */ buffer_info->skb = skb; skb = new_skb; @@ -4509,7 +4512,7 @@ e1000_clean_rx_irq_ps(struct e1000_adapt PCI_DMA_FROMDEVICE); vaddr = kmap_atomic(ps_page->ps_page[0], KM_SKB_DATA_SOFTIRQ); - memcpy(skb->tail, vaddr, l1); + memcpy(skb_tail_pointer(skb), vaddr, l1); kunmap_atomic(vaddr, KM_SKB_DATA_SOFTIRQ); pci_dma_sync_single_for_device(pdev, ps_page_dma->ps_page_dma[0], diff -puN drivers/net/eepro.c~git-net drivers/net/eepro.c --- a/drivers/net/eepro.c~git-net +++ a/drivers/net/eepro.c @@ -1591,7 +1591,6 @@ eepro_rx(struct net_device *dev) break; } - skb->dev = dev; skb_reserve(skb,2); if (lp->version == LAN595) diff -puN drivers/net/eepro100.c~git-net drivers/net/eepro100.c --- 
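[Editorial note] Two changes recur throughout the receive paths above (defxx, dgrs, dm9000, e100, e1000, eepro and the other copybreak-style drivers): memcpy() to or from skb->data becomes the typed skb_copy_to_linear_data()/skb_copy_from_linear_data() helpers, and the explicit skb->dev = dev assignment is dropped because eth_type_trans() already sets skb->dev. A minimal sketch of the converted copybreak RX path (hypothetical driver, "src", "pkt_len" and "dev" are placeholders, not taken from any hunk above):

/* Editorial sketch of the converted copybreak receive path. */
#include <linux/skbuff.h>
#include <linux/netdevice.h>
#include <linux/etherdevice.h>

static void sketch_rx_copybreak(struct net_device *dev,
                                const void *src, unsigned int pkt_len)
{
        struct sk_buff *skb = dev_alloc_skb(pkt_len + 2);

        if (!skb)
                return;
        skb_reserve(skb, 2);            /* 16 byte align the IP header */
        /* was: memcpy(skb->data, src, pkt_len); */
        skb_copy_to_linear_data(skb, src, pkt_len);
        skb_put(skb, pkt_len);
        /* no "skb->dev = dev;" here - eth_type_trans() sets it */
        skb->protocol = eth_type_trans(skb, dev);
        netif_rx(skb);
}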
a/drivers/net/eepro100.c~git-net +++ a/drivers/net/eepro100.c @@ -1793,7 +1793,6 @@ speedo_rx(struct net_device *dev) copying to a properly sized skbuff. */ if (pkt_len < rx_copybreak && (skb = dev_alloc_skb(pkt_len + 2)) != 0) { - skb->dev = dev; skb_reserve(skb, 2); /* Align IP on 16 byte boundaries */ /* 'skb_put()' points to the start of sk_buff data area. */ pci_dma_sync_single_for_cpu(sp->pdev, sp->rx_ring_dma[entry], @@ -1805,8 +1804,9 @@ speedo_rx(struct net_device *dev) eth_copy_and_sum(skb, sp->rx_skbuff[entry]->data, pkt_len, 0); skb_put(skb, pkt_len); #else - memcpy(skb_put(skb, pkt_len), sp->rx_skbuff[entry]->data, - pkt_len); + skb_copy_from_linear_data(sp->rx_skbuff[entry], + skb_put(skb, pkt_len), + pkt_len); #endif pci_dma_sync_single_for_device(sp->pdev, sp->rx_ring_dma[entry], sizeof(struct RxFD) + pkt_len, diff -puN drivers/net/eexpress.c~git-net drivers/net/eexpress.c --- a/drivers/net/eexpress.c~git-net +++ a/drivers/net/eexpress.c @@ -977,7 +977,6 @@ static void eexp_hw_rx_pio(struct net_de lp->stats.rx_dropped++; break; } - skb->dev = dev; skb_reserve(skb, 2); outw(pbuf+10, ioaddr+READ_PTR); insw(ioaddr+DATAPORT, skb_put(skb,pkt_len),(pkt_len+1)>>1); diff -puN drivers/net/ehea/ehea_main.c~git-net drivers/net/ehea/ehea_main.c --- a/drivers/net/ehea/ehea_main.c~git-net +++ a/drivers/net/ehea/ehea_main.c @@ -1267,8 +1267,8 @@ static int ehea_clean_portres(struct ehe static inline void write_ip_start_end(struct ehea_swqe *swqe, const struct sk_buff *skb) { - swqe->ip_start = (u8)(((u64)skb->nh.iph) - ((u64)skb->data)); - swqe->ip_end = (u8)(swqe->ip_start + skb->nh.iph->ihl * 4 - 1); + swqe->ip_start = skb_network_offset(skb); + swqe->ip_end = (u8)(swqe->ip_start + ip_hdrlen(skb) - 1); } static inline void write_tcp_offset_end(struct ehea_swqe *swqe, @@ -1305,13 +1305,13 @@ static void write_swqe2_TSO(struct sk_bu /* copy only eth/ip/tcp headers to immediate data and * the rest of skb->data to sg1entry */ - headersize = ETH_HLEN + (skb->nh.iph->ihl * 4) + (skb->h.th->doff * 4); + headersize = ETH_HLEN + ip_hdrlen(skb) + tcp_hdrlen(skb); skb_data_size = skb->len - skb->data_len; if (skb_data_size >= headersize) { /* copy immediate data */ - memcpy(imm_data, skb->data, headersize); + skb_copy_from_linear_data(skb, imm_data, headersize); swqe->immediate_data_length = headersize; if (skb_data_size > headersize) { @@ -1342,7 +1342,7 @@ static void write_swqe2_nonTSO(struct sk */ if (skb_data_size >= SWQE2_MAX_IMM) { /* copy immediate data */ - memcpy(imm_data, skb->data, SWQE2_MAX_IMM); + skb_copy_from_linear_data(skb, imm_data, SWQE2_MAX_IMM); swqe->immediate_data_length = SWQE2_MAX_IMM; @@ -1355,7 +1355,7 @@ static void write_swqe2_nonTSO(struct sk swqe->descriptors++; } } else { - memcpy(imm_data, skb->data, skb_data_size); + skb_copy_from_linear_data(skb, imm_data, skb_data_size); swqe->immediate_data_length = skb_data_size; } } @@ -1693,6 +1693,7 @@ static void ehea_xmit2(struct sk_buff *s struct ehea_swqe *swqe, u32 lkey) { if (skb->protocol == htons(ETH_P_IP)) { + const struct iphdr *iph = ip_hdr(skb); /* IPv4 */ swqe->tx_control |= EHEA_SWQE_CRC | EHEA_SWQE_IP_CHECKSUM @@ -1702,15 +1703,15 @@ static void ehea_xmit2(struct sk_buff *s write_ip_start_end(swqe, skb); - if (skb->nh.iph->protocol == IPPROTO_UDP) { - if ((skb->nh.iph->frag_off & IP_MF) || - (skb->nh.iph->frag_off & IP_OFFSET)) + if (iph->protocol == IPPROTO_UDP) { + if ((iph->frag_off & IP_MF) || + (iph->frag_off & IP_OFFSET)) /* IP fragment, so don't change cs */ swqe->tx_control &= 
~EHEA_SWQE_TCP_CHECKSUM; else write_udp_offset_end(swqe, skb); - } else if (skb->nh.iph->protocol == IPPROTO_TCP) { + } else if (iph->protocol == IPPROTO_TCP) { write_tcp_offset_end(swqe, skb); } @@ -1736,10 +1737,11 @@ static void ehea_xmit3(struct sk_buff *s int i; if (skb->protocol == htons(ETH_P_IP)) { + const struct iphdr *iph = ip_hdr(skb); /* IPv4 */ write_ip_start_end(swqe, skb); - if (skb->nh.iph->protocol == IPPROTO_TCP) { + if (iph->protocol == IPPROTO_TCP) { swqe->tx_control |= EHEA_SWQE_CRC | EHEA_SWQE_IP_CHECKSUM | EHEA_SWQE_TCP_CHECKSUM @@ -1747,9 +1749,9 @@ static void ehea_xmit3(struct sk_buff *s write_tcp_offset_end(swqe, skb); - } else if (skb->nh.iph->protocol == IPPROTO_UDP) { - if ((skb->nh.iph->frag_off & IP_MF) || - (skb->nh.iph->frag_off & IP_OFFSET)) + } else if (iph->protocol == IPPROTO_UDP) { + if ((iph->frag_off & IP_MF) || + (iph->frag_off & IP_OFFSET)) /* IP fragment, so don't change cs */ swqe->tx_control |= EHEA_SWQE_CRC | EHEA_SWQE_IMM_DATA_PRESENT; @@ -1775,10 +1777,11 @@ static void ehea_xmit3(struct sk_buff *s /* copy (immediate) data */ if (nfrags == 0) { /* data is in a single piece */ - memcpy(imm_data, skb->data, skb->len); + skb_copy_from_linear_data(skb, imm_data, skb->len); } else { /* first copy data from the skb->data buffer ... */ - memcpy(imm_data, skb->data, skb->len - skb->data_len); + skb_copy_from_linear_data(skb, imm_data, + skb->len - skb->data_len); imm_data += skb->len - skb->data_len; /* ... then copy data from the fragments */ diff -puN drivers/net/epic100.c~git-net drivers/net/epic100.c --- a/drivers/net/epic100.c~git-net +++ a/drivers/net/epic100.c @@ -934,7 +934,6 @@ static void epic_init_ring(struct net_de ep->rx_skbuff[i] = skb; if (skb == NULL) break; - skb->dev = dev; /* Mark as being used by this device. */ skb_reserve(skb, 2); /* 16 byte align the IP header. */ ep->rx_ring[i].bufaddr = pci_map_single(ep->pci_dev, skb->data, ep->rx_buf_sz, PCI_DMA_FROMDEVICE); @@ -1199,7 +1198,6 @@ static int epic_rx(struct net_device *de to a minimally-sized skbuff. */ if (pkt_len < rx_copybreak && (skb = dev_alloc_skb(pkt_len + 2)) != NULL) { - skb->dev = dev; skb_reserve(skb, 2); /* 16 byte align the IP header */ pci_dma_sync_single_for_cpu(ep->pci_dev, ep->rx_ring[entry].bufaddr, @@ -1236,7 +1234,6 @@ static int epic_rx(struct net_device *de skb = ep->rx_skbuff[entry] = dev_alloc_skb(ep->rx_buf_sz); if (skb == NULL) break; - skb->dev = dev; /* Mark as being used by this device. */ skb_reserve(skb, 2); /* Align IP on 16 byte boundaries */ ep->rx_ring[entry].bufaddr = pci_map_single(ep->pci_dev, skb->data, ep->rx_buf_sz, PCI_DMA_FROMDEVICE); diff -puN drivers/net/eth16i.c~git-net drivers/net/eth16i.c --- a/drivers/net/eth16i.c~git-net +++ a/drivers/net/eth16i.c @@ -1175,7 +1175,6 @@ static void eth16i_rx(struct net_device break; } - skb->dev = dev; skb_reserve(skb,2); /* diff -puN drivers/net/ewrk3.c~git-net drivers/net/ewrk3.c --- a/drivers/net/ewrk3.c~git-net +++ a/drivers/net/ewrk3.c @@ -993,7 +993,6 @@ static int ewrk3_rx(struct net_device *d if ((skb = dev_alloc_skb(pkt_len + 2)) != NULL) { unsigned char *p; - skb->dev = dev; skb_reserve(skb, 2); /* Align to 16 bytes */ p = skb_put(skb, pkt_len); diff -puN drivers/net/fealnx.c~git-net drivers/net/fealnx.c --- a/drivers/net/fealnx.c~git-net +++ a/drivers/net/fealnx.c @@ -1719,7 +1719,6 @@ static int netdev_rx(struct net_device * to a minimally-sized skbuff. 
*/ if (pkt_len < rx_copybreak && (skb = dev_alloc_skb(pkt_len + 2)) != NULL) { - skb->dev = dev; skb_reserve(skb, 2); /* 16 byte align the IP header */ pci_dma_sync_single_for_cpu(np->pci_dev, np->cur_rx->buffer, diff -puN drivers/net/fec.c~git-net drivers/net/fec.c --- a/drivers/net/fec.c~git-net +++ a/drivers/net/fec.c @@ -647,7 +647,6 @@ while (!((status = bdp->cbd_sc) & BD_ENE printk("%s: Memory squeeze, dropping packet.\n", dev->name); fep->stats.rx_dropped++; } else { - skb->dev = dev; skb_put(skb,pkt_len-4); /* Make room */ eth_copy_and_sum(skb, data, pkt_len-4, 0); skb->protocol=eth_type_trans(skb,dev); diff -puN drivers/net/fec_8xx/fec_main.c~git-net drivers/net/fec_8xx/fec_main.c --- a/drivers/net/fec_8xx/fec_main.c~git-net +++ a/drivers/net/fec_8xx/fec_main.c @@ -551,7 +551,9 @@ static int fec_enet_rx_common(struct net skbn = dev_alloc_skb(pkt_len + 2); if (skbn != NULL) { skb_reserve(skbn, 2); /* align IP header */ - memcpy(skbn->data, skb->data, pkt_len); + skb_copy_from_linear_data(skb + skbn->data, + pkt_len); /* swap */ skbt = skb; skb = skbn; @@ -561,7 +563,6 @@ static int fec_enet_rx_common(struct net skbn = dev_alloc_skb(ENET_RX_FRSIZE); if (skbn != NULL) { - skb->dev = dev; skb_put(skb, pkt_len); /* Make room */ skb->protocol = eth_type_trans(skb, dev); received++; diff -puN drivers/net/forcedeth.c~git-net drivers/net/forcedeth.c --- a/drivers/net/forcedeth.c~git-net +++ a/drivers/net/forcedeth.c @@ -1385,11 +1385,12 @@ static int nv_alloc_rx(struct net_device while (np->put_rx.orig != less_rx) { struct sk_buff *skb = dev_alloc_skb(np->rx_buf_sz + NV_RX_ALLOC_PAD); if (skb) { - skb->dev = dev; np->put_rx_ctx->skb = skb; - np->put_rx_ctx->dma = pci_map_single(np->pci_dev, skb->data, - skb->end-skb->data, PCI_DMA_FROMDEVICE); - np->put_rx_ctx->dma_len = skb->end-skb->data; + np->put_rx_ctx->dma = pci_map_single(np->pci_dev, + skb->data, + skb_tailroom(skb), + PCI_DMA_FROMDEVICE); + np->put_rx_ctx->dma_len = skb_tailroom(skb); np->put_rx.orig->buf = cpu_to_le32(np->put_rx_ctx->dma); wmb(); np->put_rx.orig->flaglen = cpu_to_le32(np->rx_buf_sz | NV_RX_AVAIL); @@ -1416,11 +1417,12 @@ static int nv_alloc_rx_optimized(struct while (np->put_rx.ex != less_rx) { struct sk_buff *skb = dev_alloc_skb(np->rx_buf_sz + NV_RX_ALLOC_PAD); if (skb) { - skb->dev = dev; np->put_rx_ctx->skb = skb; - np->put_rx_ctx->dma = pci_map_single(np->pci_dev, skb->data, - skb->end-skb->data, PCI_DMA_FROMDEVICE); - np->put_rx_ctx->dma_len = skb->end-skb->data; + np->put_rx_ctx->dma = pci_map_single(np->pci_dev, + skb->data, + skb_tailroom(skb), + PCI_DMA_FROMDEVICE); + np->put_rx_ctx->dma_len = skb_tailroom(skb); np->put_rx.ex->bufhigh = cpu_to_le64(np->put_rx_ctx->dma) >> 32; np->put_rx.ex->buflow = cpu_to_le64(np->put_rx_ctx->dma) & 0x0FFFFFFFF; wmb(); @@ -1604,8 +1606,9 @@ static void nv_drain_rx(struct net_devic wmb(); if (np->rx_skb[i].skb) { pci_unmap_single(np->pci_dev, np->rx_skb[i].dma, - np->rx_skb[i].skb->end-np->rx_skb[i].skb->data, - PCI_DMA_FROMDEVICE); + (skb_end_pointer(np->rx_skb[i].skb) - + np->rx_skb[i].skb->data), + PCI_DMA_FROMDEVICE); dev_kfree_skb(np->rx_skb[i].skb); np->rx_skb[i].skb = NULL; } @@ -4384,11 +4387,12 @@ static int nv_loopback_test(struct net_d ret = 0; goto out; } + test_dma_addr = pci_map_single(np->pci_dev, tx_skb->data, + skb_tailroom(tx_skb), + PCI_DMA_FROMDEVICE); pkt_data = skb_put(tx_skb, pkt_len); for (i = 0; i < pkt_len; i++) pkt_data[i] = (u8)(i & 0xff); - test_dma_addr = pci_map_single(np->pci_dev, tx_skb->data, - tx_skb->end-tx_skb->data, 
PCI_DMA_FROMDEVICE); if (np->desc_ver == DESC_VER_1 || np->desc_ver == DESC_VER_2) { np->tx_ring.orig[0].buf = cpu_to_le32(test_dma_addr); @@ -4445,7 +4449,7 @@ static int nv_loopback_test(struct net_d } pci_unmap_page(np->pci_dev, test_dma_addr, - tx_skb->end-tx_skb->data, + (skb_end_pointer(tx_skb) - tx_skb->data), PCI_DMA_TODEVICE); dev_kfree_skb_any(tx_skb); out: diff -puN drivers/net/fs_enet/fs_enet-main.c~git-net drivers/net/fs_enet/fs_enet-main.c --- a/drivers/net/fs_enet/fs_enet-main.c~git-net +++ a/drivers/net/fs_enet/fs_enet-main.c @@ -160,7 +160,8 @@ static int fs_enet_rx_napi(struct net_de skbn = dev_alloc_skb(pkt_len + 2); if (skbn != NULL) { skb_reserve(skbn, 2); /* align IP header */ - memcpy(skbn->data, skb->data, pkt_len); + skb_copy_from_linear_data(skb, + skbn->data, pkt_len); /* swap */ skbt = skb; skb = skbn; @@ -170,7 +171,6 @@ static int fs_enet_rx_napi(struct net_de skbn = dev_alloc_skb(ENET_RX_FRSIZE); if (skbn != NULL) { - skb->dev = dev; skb_put(skb, pkt_len); /* Make room */ skb->protocol = eth_type_trans(skb, dev); received++; @@ -294,7 +294,8 @@ static int fs_enet_rx_non_napi(struct ne skbn = dev_alloc_skb(pkt_len + 2); if (skbn != NULL) { skb_reserve(skbn, 2); /* align IP header */ - memcpy(skbn->data, skb->data, pkt_len); + skb_copy_from_linear_data(skb, + skbn->data, pkt_len); /* swap */ skbt = skb; skb = skbn; @@ -304,7 +305,6 @@ static int fs_enet_rx_non_napi(struct ne skbn = dev_alloc_skb(ENET_RX_FRSIZE); if (skbn != NULL) { - skb->dev = dev; skb_put(skb, pkt_len); /* Make room */ skb->protocol = eth_type_trans(skb, dev); received++; @@ -516,7 +516,6 @@ void fs_init_bds(struct net_device *dev) break; } fep->rx_skbuff[i] = skb; - skb->dev = dev; CBDW_BUFADDR(bdp, dma_map_single(fep->dev, skb->data, L1_CACHE_ALIGN(PKT_MAXBUF_SIZE), diff -puN drivers/net/gianfar.c~git-net drivers/net/gianfar.c --- a/drivers/net/gianfar.c~git-net +++ a/drivers/net/gianfar.c @@ -942,18 +942,18 @@ static inline void gfar_tx_checksum(stru /* Tell the controller what the protocol is */ /* And provide the already calculated phcs */ - if (skb->nh.iph->protocol == IPPROTO_UDP) { + if (ip_hdr(skb)->protocol == IPPROTO_UDP) { flags |= TXFCB_UDP; - fcb->phcs = skb->h.uh->check; + fcb->phcs = udp_hdr(skb)->check; } else - fcb->phcs = skb->h.th->check; + fcb->phcs = udp_hdr(skb)->check; /* l3os is the distance between the start of the * frame (skb->data) and the start of the IP hdr. 
* l4os is the distance between the start of the * l3 hdr and the l4 hdr */ - fcb->l3os = (u16)(skb->nh.raw - skb->data - GMAC_FCB_LEN); - fcb->l4os = (u16)(skb->h.raw - skb->nh.raw); + fcb->l3os = (u16)(skb_network_offset(skb) - GMAC_FCB_LEN); + fcb->l4os = skb_network_header_len(skb); fcb->flags = flags; } @@ -1295,8 +1295,6 @@ struct sk_buff * gfar_new_skb(struct net */ skb_reserve(skb, alignamount); - skb->dev = dev; - bdp->bufPtr = dma_map_single(NULL, skb->data, priv->rx_buffer_size, DMA_FROM_DEVICE); diff -puN drivers/net/hamachi.c~git-net drivers/net/hamachi.c --- a/drivers/net/hamachi.c~git-net +++ a/drivers/net/hamachi.c @@ -1568,7 +1568,6 @@ static int hamachi_rx(struct net_device printk(KERN_ERR "%s: rx_copybreak non-zero " "not good with RX_CHECKSUM\n", dev->name); #endif - skb->dev = dev; skb_reserve(skb, 2); /* 16 byte align the IP header */ pci_dma_sync_single_for_cpu(hmp->pci_dev, hmp->rx_ring[entry].addr, diff -puN drivers/net/hamradio/bpqether.c~git-net drivers/net/hamradio/bpqether.c --- a/drivers/net/hamradio/bpqether.c~git-net +++ a/drivers/net/hamradio/bpqether.c @@ -282,7 +282,7 @@ static int bpq_xmit(struct sk_buff *skb, } skb->protocol = ax25_type_trans(skb, dev); - skb->nh.raw = skb->data; + skb_reset_network_header(skb); dev->hard_header(skb, dev, ETH_P_BPQ, bpq->dest_addr, NULL, 0); bpq->stats.tx_packets++; bpq->stats.tx_bytes+=skb->len; diff -puN drivers/net/hamradio/dmascc.c~git-net drivers/net/hamradio/dmascc.c --- a/drivers/net/hamradio/dmascc.c~git-net +++ a/drivers/net/hamradio/dmascc.c @@ -930,7 +930,7 @@ static int scc_send_packet(struct sk_buf /* Transfer data to DMA buffer */ i = priv->tx_head; - memcpy(priv->tx_buf[i], skb->data + 1, skb->len - 1); + skb_copy_from_linear_data_offset(skb, 1, priv->tx_buf[i], skb->len - 1); priv->tx_len[i] = skb->len - 1; /* Clear interrupts while we touch our circular buffers */ diff -puN drivers/net/hamradio/hdlcdrv.c~git-net drivers/net/hamradio/hdlcdrv.c --- a/drivers/net/hamradio/hdlcdrv.c~git-net +++ a/drivers/net/hamradio/hdlcdrv.c @@ -317,7 +317,9 @@ void hdlcdrv_transmitter(struct net_devi dev_kfree_skb_irq(skb); break; } - memcpy(s->hdlctx.buffer, skb->data+1, pkt_len); + skb_copy_from_linear_data_offset(skb, 1, + s->hdlctx.buffer, + pkt_len); dev_kfree_skb_irq(skb); s->hdlctx.bp = s->hdlctx.buffer; append_crc_ccitt(s->hdlctx.buffer, pkt_len); diff -puN drivers/net/hamradio/yam.c~git-net drivers/net/hamradio/yam.c --- a/drivers/net/hamradio/yam.c~git-net +++ a/drivers/net/hamradio/yam.c @@ -638,7 +638,9 @@ static void yam_tx_byte(struct net_devic dev_kfree_skb_any(skb); break; } - memcpy(yp->tx_buf, skb->data + 1, yp->tx_len); + skb_copy_from_linear_data_offset(skb->data, 1, + yp->tx_buf, + yp->tx_len); dev_kfree_skb_any(skb); yp->tx_count = 0; yp->tx_crcl = 0x21; diff -puN drivers/net/hp100.c~git-net drivers/net/hp100.c --- a/drivers/net/hp100.c~git-net +++ a/drivers/net/hp100.c @@ -1816,7 +1816,6 @@ static void hp100_rx(struct net_device * u_char *ptr; skb_reserve(skb,2); - skb->dev = dev; /* ptr to start of the sk_buff data area */ skb_put(skb, pkt_len); diff -puN drivers/net/ibm_emac/ibm_emac_core.c~git-net drivers/net/ibm_emac/ibm_emac_core.c --- a/drivers/net/ibm_emac/ibm_emac_core.c~git-net +++ a/drivers/net/ibm_emac/ibm_emac_core.c @@ -1338,7 +1338,7 @@ static inline int emac_rx_sg_append(stru dev_kfree_skb(dev->rx_sg_skb); dev->rx_sg_skb = NULL; } else { - cacheable_memcpy(dev->rx_sg_skb->tail, + cacheable_memcpy(skb_tail_pointer(dev->rx_sg_skb), dev->rx_skb[slot]->data, len); skb_put(dev->rx_sg_skb, 
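[Editorial note] The hamradio conversions above (dmascc, hdlcdrv, yam) use the offset variant of the copy helper. Its argument order is the skb itself (not skb->data), the byte offset into the linear data, the destination, and the length. A small illustration, as an editorial sketch rather than code from the patch ("tx_buf" is a placeholder):

/* Editorial sketch: copy a frame out of the linear area, skipping a
 * one-byte control prefix. */
#include <linux/skbuff.h>

static void sketch_copy_skip_prefix(const struct sk_buff *skb, u8 *tx_buf)
{
        /* was: memcpy(tx_buf, skb->data + 1, skb->len - 1); */
        skb_copy_from_linear_data_offset(skb, 1, tx_buf, skb->len - 1);
}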
len); emac_recycle_rx_skb(dev, slot, len); @@ -1398,7 +1398,6 @@ static int emac_poll_rx(void *param, int skb_put(skb, len); push_packet: - skb->dev = dev->ndev; skb->protocol = eth_type_trans(skb, dev->ndev); emac_rx_csum(dev, skb, ctrl); diff -puN drivers/net/ibmlana.c~git-net drivers/net/ibmlana.c --- a/drivers/net/ibmlana.c~git-net +++ a/drivers/net/ibmlana.c @@ -601,7 +601,6 @@ static void irqrx_handler(struct net_dev /* set up skb fields */ - skb->dev = dev; skb->protocol = eth_type_trans(skb, dev); skb->ip_summed = CHECKSUM_NONE; diff -puN drivers/net/ibmveth.c~git-net drivers/net/ibmveth.c --- a/drivers/net/ibmveth.c~git-net +++ a/drivers/net/ibmveth.c @@ -798,7 +798,6 @@ static int ibmveth_poll(struct net_devic skb_reserve(skb, offset); skb_put(skb, length); - skb->dev = netdev; skb->protocol = eth_type_trans(skb, netdev); netif_receive_skb(skb); /* send it up */ diff -puN drivers/net/ioc3-eth.c~git-net drivers/net/ioc3-eth.c --- a/drivers/net/ioc3-eth.c~git-net +++ a/drivers/net/ioc3-eth.c @@ -633,8 +633,6 @@ static inline void ioc3_rx(struct ioc3_p ip->rx_skbs[rx_entry] = NULL; /* Poison */ - new_skb->dev = priv_netdev(ip); - /* Because we reserve afterwards. */ skb_put(new_skb, (1664 + RX_OFFSET)); rxb = (struct ioc3_erxbuf *) new_skb->data; @@ -940,7 +938,6 @@ static void ioc3_alloc_rings(struct net_ } ip->rx_skbs[i] = skb; - skb->dev = dev; /* Because we reserve afterwards. */ skb_put(skb, (1664 + RX_OFFSET)); @@ -1396,9 +1393,9 @@ static int ioc3_start_xmit(struct sk_buf * manually. */ if (skb->ip_summed == CHECKSUM_PARTIAL) { - int proto = ntohs(skb->nh.iph->protocol); + const struct iphdr *ih = ip_hdr(skb); + const int proto = ntohs(ih->protocol); unsigned int csoff; - struct iphdr *ih = skb->nh.iph; uint32_t csum, ehsum; uint16_t *eh; @@ -1425,11 +1422,11 @@ static int ioc3_start_xmit(struct sk_buf csoff = ETH_HLEN + (ih->ihl << 2); if (proto == IPPROTO_UDP) { csoff += offsetof(struct udphdr, check); - skb->h.uh->check = csum; + udp_hdr(skb)->check = csum; } if (proto == IPPROTO_TCP) { csoff += offsetof(struct tcphdr, check); - skb->h.th->check = csum; + tcp_hdr(skb)->check = csum; } w0 = ETXD_DOCHECKSUM | (csoff << ETXD_CHKOFF_SHIFT); @@ -1446,7 +1443,7 @@ static int ioc3_start_xmit(struct sk_buf if (len <= 104) { /* Short packet, let's copy it directly into the ring. */ - memcpy(desc->data, skb->data, skb->len); + skb_copy_from_linear_data(skb, desc->data, skb->len); if (len < ETH_ZLEN) { /* Very short packet, pad with zeros at the end. 
*/ memset(desc->data + len, 0, ETH_ZLEN - len); diff -puN drivers/net/irda/ali-ircc.c~git-net drivers/net/irda/ali-ircc.c --- a/drivers/net/irda/ali-ircc.c~git-net +++ a/drivers/net/irda/ali-ircc.c @@ -1472,9 +1472,8 @@ static int ali_ircc_fir_hard_xmit(struct self->stats.tx_bytes += skb->len; - memcpy(self->tx_fifo.queue[self->tx_fifo.free].start, skb->data, - skb->len); - + skb_copy_from_linear_data(skb, self->tx_fifo.queue[self->tx_fifo.free].start, + skb->len); self->tx_fifo.len++; self->tx_fifo.free++; @@ -1924,7 +1923,7 @@ static int ali_ircc_dma_receive_complet /* Copy frame without CRC, CRC is removed by hardware*/ skb_put(skb, len); - memcpy(skb->data, self->rx_buff.data, len); + skb_copy_to_linear_data(skb, self->rx_buff.data, len); /* Move to next frame */ self->rx_buff.data += len; @@ -1932,7 +1931,7 @@ static int ali_ircc_dma_receive_complet self->stats.rx_packets++; skb->dev = self->netdev; - skb->mac.raw = skb->data; + skb_reset_mac_header(skb); skb->protocol = htons(ETH_P_IRDA); netif_rx(skb); self->netdev->last_rx = jiffies; diff -puN drivers/net/irda/au1k_ir.c~git-net drivers/net/irda/au1k_ir.c --- a/drivers/net/irda/au1k_ir.c~git-net +++ a/drivers/net/irda/au1k_ir.c @@ -526,7 +526,7 @@ static int au1k_irda_hard_xmit(struct sk if (aup->speed == 4000000) { /* FIR */ - memcpy((void *)pDB->vaddr, skb->data, skb->len); + skb_copy_from_linear_data(skb, pDB->vaddr, skb->len); ptxd->count_0 = skb->len & 0xff; ptxd->count_1 = (skb->len >> 8) & 0xff; @@ -604,9 +604,9 @@ static int au1k_irda_rx(struct net_devic skb_put(skb, count); else skb_put(skb, count-2); - memcpy(skb->data, (void *)pDB->vaddr, count-2); + skb_copy_to_linear_data(skb, pDB->vaddr, count - 2); skb->dev = dev; - skb->mac.raw = skb->data; + skb_reset_mac_header(skb); skb->protocol = htons(ETH_P_IRDA); netif_rx(skb); prxd->count_0 = 0; diff -puN drivers/net/irda/donauboe.c~git-net drivers/net/irda/donauboe.c --- a/drivers/net/irda/donauboe.c~git-net +++ a/drivers/net/irda/donauboe.c @@ -1119,7 +1119,7 @@ dumpbufs(skb->data,skb->len,'>'); else { len = skb->len; - memcpy (self->tx_bufs[self->txs], skb->data, len); + skb_copy_from_linear_data(skb, self->tx_bufs[self->txs], len); } self->ring->tx[self->txs].len = len & 0x0fff; @@ -1282,11 +1282,11 @@ dumpbufs(self->rx_bufs[self->rxs],len,'< skb_reserve (skb, 1); skb_put (skb, len); - memcpy (skb->data, self->rx_bufs[self->rxs], len); - + skb_copy_to_linear_data(skb, self->rx_bufs[self->rxs], + len); self->stats.rx_packets++; skb->dev = self->netdev; - skb->mac.raw = skb->data; + skb_reset_mac_header(skb); skb->protocol = htons (ETH_P_IRDA); } else diff -puN drivers/net/irda/irda-usb.c~git-net drivers/net/irda/irda-usb.c --- a/drivers/net/irda/irda-usb.c~git-net +++ a/drivers/net/irda/irda-usb.c @@ -441,7 +441,7 @@ static int irda_usb_hard_xmit(struct sk_ goto drop; } - memcpy(self->tx_buff + self->header_length, skb->data, skb->len); + skb_copy_from_linear_data(skb, self->tx_buff + self->header_length, skb->len); /* Change setting for next frame */ if (self->capability & IUC_STIR421X) { @@ -902,7 +902,7 @@ static void irda_usb_receive(struct urb if(docopy) { /* Copy packet, so we can recycle the original */ - memcpy(newskb->data, skb->data, urb->actual_length); + skb_copy_from_linear_data(skb, newskb->data, urb->actual_length); /* Deliver this new skb */ dataskb = newskb; /* And hook the old skb to the URB @@ -921,7 +921,7 @@ static void irda_usb_receive(struct urb /* Ask the networking layer to queue the packet for the IrDA stack */ dataskb->dev = self->netdev; - 
dataskb->mac.raw = dataskb->data; + skb_reset_mac_header(dataskb); dataskb->protocol = htons(ETH_P_IRDA); len = dataskb->len; netif_rx(dataskb); diff -puN drivers/net/irda/mcs7780.c~git-net drivers/net/irda/mcs7780.c --- a/drivers/net/irda/mcs7780.c~git-net +++ a/drivers/net/irda/mcs7780.c @@ -200,14 +200,14 @@ static inline int mcs_setup_transceiver_ /* Setup a communication between mcs7780 and agilent chip. */ static inline int mcs_setup_transceiver_agilent(struct mcs_cb *mcs) { - IRDA_WARNING("This transceiver type is not supported yet."); + IRDA_WARNING("This transceiver type is not supported yet.\n"); return 1; } /* Setup a communication between mcs7780 and sharp chip. */ static inline int mcs_setup_transceiver_sharp(struct mcs_cb *mcs) { - IRDA_WARNING("This transceiver type is not supported yet."); + IRDA_WARNING("This transceiver type is not supported yet.\n"); return 1; } @@ -279,7 +279,7 @@ static inline int mcs_setup_transceiver( break; default: - IRDA_WARNING("Unknown transceiver type: %d", + IRDA_WARNING("Unknown transceiver type: %d\n", mcs->transceiver_type); ret = 1; } @@ -318,7 +318,7 @@ static inline int mcs_setup_transceiver( return ret; error: - IRDA_ERROR("%s", msg); + IRDA_ERROR("%s\n", msg); return ret; } @@ -353,7 +353,7 @@ static unsigned mcs_wrap_fir_skb(const s buf[0] = len & 0xff; buf[1] = (len >> 8) & 0xff; /* copy the data into the tx buffer. */ - memcpy(buf+2, skb->data, skb->len); + skb_copy_from_linear_data(skb, buf + 2, skb->len); /* put the fcs in the last four bytes in little endian order. */ buf[len - 4] = fcs & 0xff; buf[len - 3] = (fcs >> 8) & 0xff; @@ -377,7 +377,7 @@ static unsigned mcs_wrap_mir_skb(const s buf[0] = len & 0xff; buf[1] = (len >> 8) & 0xff; /* copy the data */ - memcpy(buf+2, skb->data, skb->len); + skb_copy_from_linear_data(skb, buf + 2, skb->len); /* put the fcs in last two bytes in little endian order. 
*/ buf[len - 2] = fcs & 0xff; buf[len - 1] = (fcs >> 8) & 0xff; @@ -426,9 +426,9 @@ static void mcs_unwrap_mir(struct mcs_cb } skb_reserve(skb, 1); - memcpy(skb->data, buf, new_len); + skb_copy_to_linear_data(skb, buf, new_len); skb_put(skb, new_len); - skb->mac.raw = skb->data; + skb_reset_mac_header(skb); skb->protocol = htons(ETH_P_IRDA); skb->dev = mcs->netdev; @@ -479,9 +479,9 @@ static void mcs_unwrap_fir(struct mcs_cb } skb_reserve(skb, 1); - memcpy(skb->data, buf, new_len); + skb_copy_to_linear_data(skb, buf, new_len); skb_put(skb, new_len); - skb->mac.raw = skb->data; + skb_reset_mac_header(skb); skb->protocol = htons(ETH_P_IRDA); skb->dev = mcs->netdev; @@ -587,7 +587,7 @@ static int mcs_speed_change(struct mcs_c } while(cnt++ < 100 && (rval & MCS_IRINTX)); if(cnt >= 100) { - IRDA_ERROR("unable to change speed"); + IRDA_ERROR("unable to change speed\n"); ret = -EIO; goto error; } @@ -638,7 +638,7 @@ static int mcs_speed_change(struct mcs_c default: ret = 1; - IRDA_WARNING("Unknown transceiver type: %d", + IRDA_WARNING("Unknown transceiver type: %d\n", mcs->transceiver_type); } if (unlikely(ret)) @@ -733,7 +733,7 @@ static int mcs_net_open(struct net_devic sprintf(hwname, "usb#%d", mcs->usbdev->devnum); mcs->irlap = irlap_open(netdev, &mcs->qos, hwname); if (!mcs->irlap) { - IRDA_ERROR("mcs7780: irlap_open failed"); + IRDA_ERROR("mcs7780: irlap_open failed\n"); goto error2; } @@ -862,7 +862,7 @@ static int mcs_hard_xmit(struct sk_buff mcs->out_buf, wraplen, mcs_send_irq, mcs); if ((ret = usb_submit_urb(mcs->tx_urb, GFP_ATOMIC))) { - IRDA_ERROR("failed tx_urb: %d", ret); + IRDA_ERROR("failed tx_urb: %d\n", ret); switch (ret) { case -ENODEV: case -EPIPE: @@ -897,7 +897,7 @@ static int mcs_probe(struct usb_interfac if (!ndev) goto error1; - IRDA_DEBUG(1, "MCS7780 USB-IrDA bridge found at %d.", udev->devnum); + IRDA_DEBUG(1, "MCS7780 USB-IrDA bridge found at %d.\n", udev->devnum); /* what is it realy for? */ SET_MODULE_OWNER(ndev); @@ -905,7 +905,7 @@ static int mcs_probe(struct usb_interfac ret = usb_reset_configuration(udev); if (ret != 0) { - IRDA_ERROR("mcs7780: usb reset configuration failed"); + IRDA_ERROR("mcs7780: usb reset configuration failed\n"); goto error2; } @@ -950,7 +950,7 @@ static int mcs_probe(struct usb_interfac if (ret != 0) goto error2; - IRDA_DEBUG(1, "IrDA: Registered MosChip MCS7780 device as %s", + IRDA_DEBUG(1, "IrDA: Registered MosChip MCS7780 device as %s\n", ndev->name); mcs->transceiver_type = transceiver_type; @@ -981,7 +981,7 @@ static void mcs_disconnect(struct usb_in free_netdev(mcs->netdev); usb_set_intfdata(intf, NULL); - IRDA_DEBUG(0, "MCS7780 now disconnected."); + IRDA_DEBUG(0, "MCS7780 now disconnected.\n"); } /* Module insertion */ @@ -992,7 +992,7 @@ static int __init mcs_init(void) /* register this driver with the USB subsystem */ result = usb_register(&mcs_driver); if (result) - IRDA_ERROR("usb_register failed. Error number %d", result); + IRDA_ERROR("usb_register failed. 
Error number %d\n", result); return result; } diff -puN drivers/net/irda/nsc-ircc.c~git-net drivers/net/irda/nsc-ircc.c --- a/drivers/net/irda/nsc-ircc.c~git-net +++ a/drivers/net/irda/nsc-ircc.c @@ -1466,9 +1466,8 @@ static int nsc_ircc_hard_xmit_fir(struct self->stats.tx_bytes += skb->len; - memcpy(self->tx_fifo.queue[self->tx_fifo.free].start, skb->data, - skb->len); - + skb_copy_from_linear_data(skb, self->tx_fifo.queue[self->tx_fifo.free].start, + skb->len); self->tx_fifo.len++; self->tx_fifo.free++; @@ -1869,10 +1868,14 @@ static int nsc_ircc_dma_receive_complete /* Copy frame without CRC */ if (self->io.speed < 4000000) { skb_put(skb, len-2); - memcpy(skb->data, self->rx_buff.data, len-2); + skb_copy_to_linear_data(skb, + self->rx_buff.data, + len - 2); } else { skb_put(skb, len-4); - memcpy(skb->data, self->rx_buff.data, len-4); + skb_copy_to_linear_data(skb, + self->rx_buff.data, + len - 4); } /* Move to next frame */ @@ -1881,7 +1884,7 @@ static int nsc_ircc_dma_receive_complete self->stats.rx_packets++; skb->dev = self->netdev; - skb->mac.raw = skb->data; + skb_reset_mac_header(skb); skb->protocol = htons(ETH_P_IRDA); netif_rx(skb); self->netdev->last_rx = jiffies; diff -puN drivers/net/irda/pxaficp_ir.c~git-net drivers/net/irda/pxaficp_ir.c --- a/drivers/net/irda/pxaficp_ir.c~git-net +++ a/drivers/net/irda/pxaficp_ir.c @@ -386,12 +386,12 @@ static void pxa_irda_fir_irq_eif(struct /* Align IP header to 20 bytes */ skb_reserve(skb, 1); - memcpy(skb->data, si->dma_rx_buff, len); + skb_copy_to_linear_data(skb, si->dma_rx_buff, len); skb_put(skb, len); /* Feed it to IrLAP */ skb->dev = dev; - skb->mac.raw = skb->data; + skb_reset_mac_header(skb); skb->protocol = htons(ETH_P_IRDA); netif_rx(skb); @@ -484,7 +484,7 @@ static int pxa_irda_hard_xmit(struct sk_ unsigned long mtt = irda_get_mtt(skb); si->dma_tx_buff_len = skb->len; - memcpy(si->dma_tx_buff, skb->data, skb->len); + skb_copy_from_linear_data(skb, si->dma_tx_buff, skb->len); if (mtt) while ((unsigned)(OSCR - si->last_oscr)/4 < mtt) diff -puN drivers/net/irda/sa1100_ir.c~git-net drivers/net/irda/sa1100_ir.c --- a/drivers/net/irda/sa1100_ir.c~git-net +++ a/drivers/net/irda/sa1100_ir.c @@ -504,7 +504,7 @@ static void sa1100_irda_fir_error(struct skb_put(skb, len); skb->dev = dev; - skb->mac.raw = skb->data; + skb_reset_mac_header(skb); skb->protocol = htons(ETH_P_IRDA); si->stats.rx_packets++; si->stats.rx_bytes += len; diff -puN drivers/net/irda/smsc-ircc2.c~git-net drivers/net/irda/smsc-ircc2.c --- a/drivers/net/irda/smsc-ircc2.c~git-net +++ a/drivers/net/irda/smsc-ircc2.c @@ -315,6 +315,7 @@ static struct smsc_chip __initdata lpc_c { /* Base address 0x2E or 0x4E */ { "47N227", KEY55_1|FIR|SERx4, 0x5a, 0x00 }, + { "47N227", KEY55_1|FIR|SERx4, 0x7a, 0x00 }, { "47N267", KEY55_1|FIR|SERx4, 0x5e, 0x00 }, { NULL } }; @@ -1161,7 +1162,7 @@ static int smsc_ircc_hard_xmit_fir(struc self->new_speed = speed; } - memcpy(self->tx_buff.head, skb->data, skb->len); + skb_copy_from_linear_data(skb, self->tx_buff.head, skb->len); self->tx_buff.len = skb->len; self->tx_buff.data = self->tx_buff.head; @@ -1412,7 +1413,7 @@ static void smsc_ircc_dma_receive_comple self->stats.rx_bytes += len; skb->dev = self->netdev; - skb->mac.raw = skb->data; + skb_reset_mac_header(skb); skb->protocol = htons(ETH_P_IRDA); netif_rx(skb); } diff -puN drivers/net/irda/stir4200.c~git-net drivers/net/irda/stir4200.c --- a/drivers/net/irda/stir4200.c~git-net +++ a/drivers/net/irda/stir4200.c @@ -52,7 +52,6 @@ #include #include #include -#include #include #include 
#include @@ -349,7 +348,7 @@ static void fir_eof(struct stir_cb *stir } skb_reserve(nskb, 1); skb = nskb; - memcpy(nskb->data, rx_buff->data, len); + skb_copy_to_linear_data(nskb, rx_buff->data, len); } else { nskb = dev_alloc_skb(rx_buff->truesize); if (unlikely(!nskb)) { @@ -364,7 +363,7 @@ static void fir_eof(struct stir_cb *stir skb_put(skb, len); - skb->mac.raw = skb->data; + skb_reset_mac_header(skb); skb->protocol = htons(ETH_P_IRDA); skb->dev = stir->netdev; diff -puN drivers/net/irda/via-ircc.c~git-net drivers/net/irda/via-ircc.c --- a/drivers/net/irda/via-ircc.c~git-net +++ a/drivers/net/irda/via-ircc.c @@ -925,8 +925,8 @@ static int via_ircc_hard_xmit_fir(struct self->tx_fifo.tail += skb->len; self->stats.tx_bytes += skb->len; - memcpy(self->tx_fifo.queue[self->tx_fifo.free].start, skb->data, - skb->len); + skb_copy_from_linear_data(skb, + self->tx_fifo.queue[self->tx_fifo.free].start, skb->len); self->tx_fifo.len++; self->tx_fifo.free++; //F01 if (self->tx_fifo.len == 1) { @@ -1125,7 +1125,7 @@ static int via_ircc_dma_receive_complete self->stats.rx_bytes += len; self->stats.rx_packets++; skb->dev = self->netdev; - skb->mac.raw = skb->data; + skb_reset_mac_header(skb); skb->protocol = htons(ETH_P_IRDA); netif_rx(skb); return TRUE; @@ -1189,7 +1189,7 @@ F01_E */ skb_reserve(skb, 1); skb_put(skb, len - 4); - memcpy(skb->data, self->rx_buff.data, len - 4); + skb_copy_to_linear_data(skb, self->rx_buff.data, len - 4); IRDA_DEBUG(2, "%s(): len=%x.rx_buff=%p\n", __FUNCTION__, len - 4, self->rx_buff.data); @@ -1198,7 +1198,7 @@ F01_E */ self->stats.rx_bytes += len; self->stats.rx_packets++; skb->dev = self->netdev; - skb->mac.raw = skb->data; + skb_reset_mac_header(skb); skb->protocol = htons(ETH_P_IRDA); netif_rx(skb); @@ -1234,7 +1234,7 @@ static int upload_rxdata(struct via_ircc } skb_reserve(skb, 1); skb_put(skb, len - 4 + 1); - memcpy(skb->data, self->rx_buff.data, len - 4 + 1); + skb_copy_to_linear_data(skb, self->rx_buff.data, len - 4 + 1); st_fifo->tail++; st_fifo->len++; if (st_fifo->tail > MAX_RX_WINDOW) @@ -1244,7 +1244,7 @@ static int upload_rxdata(struct via_ircc self->stats.rx_bytes += len; self->stats.rx_packets++; skb->dev = self->netdev; - skb->mac.raw = skb->data; + skb_reset_mac_header(skb); skb->protocol = htons(ETH_P_IRDA); netif_rx(skb); if (st_fifo->len < (MAX_RX_WINDOW + 2)) { @@ -1303,7 +1303,7 @@ static int RxTimerHandler(struct via_irc } skb_reserve(skb, 1); skb_put(skb, len - 4); - memcpy(skb->data, self->rx_buff.data, len - 4); + skb_copy_to_linear_data(skb, self->rx_buff.data, len - 4); IRDA_DEBUG(2, "%s(): len=%x.head=%x\n", __FUNCTION__, len - 4, st_fifo->head); @@ -1313,7 +1313,7 @@ static int RxTimerHandler(struct via_irc self->stats.rx_bytes += len; self->stats.rx_packets++; skb->dev = self->netdev; - skb->mac.raw = skb->data; + skb_reset_mac_header(skb); skb->protocol = htons(ETH_P_IRDA); netif_rx(skb); } //while diff -puN drivers/net/irda/vlsi_ir.c~git-net drivers/net/irda/vlsi_ir.c --- a/drivers/net/irda/vlsi_ir.c~git-net +++ a/drivers/net/irda/vlsi_ir.c @@ -595,7 +595,7 @@ static int vlsi_process_rx(struct vlsi_r rd->skb = NULL; skb->dev = ndev; memcpy(skb_put(skb,len), rd->buf, len); - skb->mac.raw = skb->data; + skb_reset_mac_header(skb); if (in_interrupt()) netif_rx(skb); else @@ -993,7 +993,7 @@ static int vlsi_hard_start_xmit(struct s goto drop; } else - memcpy(rd->buf, skb->data, len); + skb_copy_from_linear_data(skb, rd->buf, len); } rd->skb = skb; /* remember skb for tx-complete stats */ diff -puN drivers/net/irda/w83977af_ir.c~git-net 
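[Editorial note] The IrDA drivers above (ali-ircc, au1k_ir, donauboe, irda-usb, mcs7780, nsc-ircc, pxaficp, sa1100, smsc-ircc2, stir4200, via-ircc, vlsi_ir, w83977af) all finish their receive completion with the same sequence: copy the decoded frame into the linear area, reset the MAC header, set ETH_P_IRDA and hand the skb to netif_rx(). A generic sketch of that tail, not tied to any one driver ("ndev", "frame" and "len" are placeholders):

/* Editorial sketch of the common IrDA RX hand-off after this series. */
#include <linux/skbuff.h>
#include <linux/netdevice.h>
#include <linux/if_ether.h>

static void sketch_irda_rx_deliver(struct net_device *ndev,
                                   const void *frame, unsigned int len)
{
        struct sk_buff *skb = dev_alloc_skb(len + 1);

        if (!skb)
                return;
        skb_reserve(skb, 1);
        skb_put(skb, len);
        /* was: memcpy(skb->data, frame, len); */
        skb_copy_to_linear_data(skb, frame, len);
        skb->dev = ndev;
        /* was: skb->mac.raw = skb->data; */
        skb_reset_mac_header(skb);
        skb->protocol = htons(ETH_P_IRDA);
        netif_rx(skb);
}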
drivers/net/irda/w83977af_ir.c --- a/drivers/net/irda/w83977af_ir.c~git-net +++ a/drivers/net/irda/w83977af_ir.c @@ -529,7 +529,7 @@ int w83977af_hard_xmit(struct sk_buff *s /* Decide if we should use PIO or DMA transfer */ if (self->io.speed > PIO_MAX_SPEED) { self->tx_buff.data = self->tx_buff.head; - memcpy(self->tx_buff.data, skb->data, skb->len); + skb_copy_from_linear_data(skb, self->tx_buff.data, skb->len); self->tx_buff.len = skb->len; mtt = irda_get_mtt(skb); @@ -908,10 +908,14 @@ int w83977af_dma_receive_complete(struct /* Copy frame without CRC */ if (self->io.speed < 4000000) { skb_put(skb, len-2); - memcpy(skb->data, self->rx_buff.data, len-2); + skb_copy_to_linear_data(skb, + self->rx_buff.data, + len - 2); } else { skb_put(skb, len-4); - memcpy(skb->data, self->rx_buff.data, len-4); + skb_copy_to_linear_data(skb, + self->rx_buff.data, + len - 4); } /* Move to next frame */ @@ -919,7 +923,7 @@ int w83977af_dma_receive_complete(struct self->stats.rx_packets++; skb->dev = self->netdev; - skb->mac.raw = skb->data; + skb_reset_mac_header(skb); skb->protocol = htons(ETH_P_IRDA); netif_rx(skb); self->netdev->last_rx = jiffies; diff -puN drivers/net/iseries_veth.c~git-net drivers/net/iseries_veth.c --- a/drivers/net/iseries_veth.c~git-net +++ a/drivers/net/iseries_veth.c @@ -1540,7 +1540,6 @@ static void veth_receive(struct veth_lpa } skb_put(skb, length); - skb->dev = dev; skb->protocol = eth_type_trans(skb, dev); skb->ip_summed = CHECKSUM_NONE; netif_rx(skb); /* send it up */ diff -puN drivers/net/ixgb/ixgb_main.c~git-net drivers/net/ixgb/ixgb_main.c --- a/drivers/net/ixgb/ixgb_main.c~git-net +++ a/drivers/net/ixgb/ixgb_main.c @@ -1182,24 +1182,27 @@ ixgb_tso(struct ixgb_adapter *adapter, s if (likely(skb_is_gso(skb))) { struct ixgb_buffer *buffer_info; + struct iphdr *iph; + if (skb_header_cloned(skb)) { err = pskb_expand_head(skb, 0, 0, GFP_ATOMIC); if (err) return err; } - hdr_len = ((skb->h.raw - skb->data) + (skb->h.th->doff << 2)); + hdr_len = skb_transport_offset(skb) + tcp_hdrlen(skb); mss = skb_shinfo(skb)->gso_size; - skb->nh.iph->tot_len = 0; - skb->nh.iph->check = 0; - skb->h.th->check = ~csum_tcpudp_magic(skb->nh.iph->saddr, - skb->nh.iph->daddr, - 0, IPPROTO_TCP, 0); - ipcss = skb->nh.raw - skb->data; - ipcso = (void *)&(skb->nh.iph->check) - (void *)skb->data; - ipcse = skb->h.raw - skb->data - 1; - tucss = skb->h.raw - skb->data; - tucso = (void *)&(skb->h.th->check) - (void *)skb->data; + iph = ip_hdr(skb); + iph->tot_len = 0; + iph->check = 0; + tcp_hdr(skb)->check = ~csum_tcpudp_magic(iph->saddr, + iph->daddr, 0, + IPPROTO_TCP, 0); + ipcss = skb_network_offset(skb); + ipcso = (void *)&(iph->check) - (void *)skb->data; + ipcse = skb_transport_offset(skb) - 1; + tucss = skb_transport_offset(skb); + tucso = (void *)&(tcp_hdr(skb)->check) - (void *)skb->data; tucse = 0; i = adapter->tx_ring.next_to_use; @@ -1243,7 +1246,7 @@ ixgb_tx_csum(struct ixgb_adapter *adapte if(likely(skb->ip_summed == CHECKSUM_PARTIAL)) { struct ixgb_buffer *buffer_info; - css = skb->h.raw - skb->data; + css = skb_transport_offset(skb); cso = css + skb->csum_offset; i = adapter->tx_ring.next_to_use; @@ -2014,9 +2017,12 @@ ixgb_clean_rx_irq(struct ixgb_adapter *a netdev_alloc_skb(netdev, length + NET_IP_ALIGN); if (new_skb) { skb_reserve(new_skb, NET_IP_ALIGN); - memcpy(new_skb->data - NET_IP_ALIGN, - skb->data - NET_IP_ALIGN, - length + NET_IP_ALIGN); + skb_copy_to_linear_data_offset(new_skb, + -NET_IP_ALIGN, + (skb->data - + NET_IP_ALIGN), + (length + + NET_IP_ALIGN)); /* save the skb in 
buffer_info as good */ buffer_info->skb = skb; skb = new_skb; diff -puN drivers/net/ixp2000/ixpdev.c~git-net drivers/net/ixp2000/ixpdev.c --- a/drivers/net/ixp2000/ixpdev.c~git-net +++ a/drivers/net/ixp2000/ixpdev.c @@ -110,11 +110,10 @@ static int ixpdev_rx(struct net_device * skb = dev_alloc_skb(desc->pkt_length + 2); if (likely(skb != NULL)) { - skb->dev = nds[desc->channel]; skb_reserve(skb, 2); eth_copy_and_sum(skb, buf, desc->pkt_length, 0); skb_put(skb, desc->pkt_length); - skb->protocol = eth_type_trans(skb, skb->dev); + skb->protocol = eth_type_trans(skb, nds[desc->channel]); skb->dev->last_rx = jiffies; diff -puN drivers/net/lance.c~git-net drivers/net/lance.c --- a/drivers/net/lance.c~git-net +++ a/drivers/net/lance.c @@ -988,7 +988,7 @@ static int lance_start_xmit(struct sk_bu if (lance_debug > 5) printk("%s: bouncing a high-memory packet (%#x).\n", dev->name, (u32)isa_virt_to_bus(skb->data)); - memcpy(&lp->tx_bounce_buffs[entry], skb->data, skb->len); + skb_copy_from_linear_data(skb, &lp->tx_bounce_buffs[entry], skb->len); lp->tx_ring[entry].base = ((u32)isa_virt_to_bus((lp->tx_bounce_buffs + entry)) & 0xffffff) | 0x83000000; dev_kfree_skb(skb); @@ -1184,7 +1184,6 @@ lance_rx(struct net_device *dev) } break; } - skb->dev = dev; skb_reserve(skb,2); /* 16 byte align */ skb_put(skb,pkt_len); /* Make room */ eth_copy_and_sum(skb, diff -puN drivers/net/lasi_82596.c~git-net drivers/net/lasi_82596.c --- a/drivers/net/lasi_82596.c~git-net +++ a/drivers/net/lasi_82596.c @@ -801,7 +801,6 @@ memory_squeeze: lp->stats.rx_dropped++; } else { - skb->dev = dev; if (!rx_in_place) { /* 16 byte align the data fields */ dma_sync_single_for_cpu(lp->dev, (dma_addr_t)WSWAPchar(rbd->b_data), PKT_BUF_SZ, DMA_FROM_DEVICE); diff -puN drivers/net/lib8390.c~git-net drivers/net/lib8390.c --- a/drivers/net/lib8390.c~git-net +++ a/drivers/net/lib8390.c @@ -722,7 +722,6 @@ static void ei_receive(struct net_device else { skb_reserve(skb,2); /* IP headers on 16 byte boundaries */ - skb->dev = dev; skb_put(skb, pkt_len); /* Make room */ ei_block_input(dev, pkt_len, skb, current_offset + sizeof(rx_frame)); skb->protocol=eth_type_trans(skb,dev); diff -puN drivers/net/loopback.c~git-net drivers/net/loopback.c --- a/drivers/net/loopback.c~git-net +++ a/drivers/net/loopback.c @@ -75,8 +75,9 @@ static DEFINE_PER_CPU(struct pcpu_lstats #ifdef LOOPBACK_TSO static void emulate_large_send_offload(struct sk_buff *skb) { - struct iphdr *iph = skb->nh.iph; - struct tcphdr *th = (struct tcphdr*)(skb->nh.raw + (iph->ihl * 4)); + struct iphdr *iph = ip_hdr(skb); + struct tcphdr *th = (struct tcphdr *)(skb_network_header(skb) + + (iph->ihl * 4)); unsigned int doffset = (iph->ihl + th->doff) * 4; unsigned int mtu = skb_shinfo(skb)->gso_size + doffset; unsigned int offset = 0; @@ -90,10 +91,11 @@ static void emulate_large_send_offload(s if (!nskb) break; skb_reserve(nskb, 32); - nskb->mac.raw = nskb->data - 14; - nskb->nh.raw = nskb->data; - iph = nskb->nh.iph; - memcpy(nskb->data, skb->nh.raw, doffset); + skb_set_mac_header(nskb, -ETH_HLEN); + skb_reset_network_header(nskb); + iph = ip_hdr(nskb); + skb_copy_to_linear_data(nskb, skb_network_header(skb), + doffset); if (skb_copy_bits(skb, doffset + offset, nskb->data + doffset, @@ -108,7 +110,7 @@ static void emulate_large_send_offload(s memcpy(nskb->cb, skb->cb, sizeof(skb->cb)); nskb->pkt_type = skb->pkt_type; - th = (struct tcphdr*)(nskb->nh.raw + iph->ihl*4); + th = (struct tcphdr *)(skb_network_header(nskb) + iph->ihl * 4); iph->tot_len = htons(frag_size + doffset); iph->id = 
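[Editorial note] In the TSO and checksum-offload paths above (e1000, ixgb, myri10ge), header arithmetic that used to be written as pointer differences against skb->data is expressed through the offset helpers. A hedged sketch of the equivalent computation (local names are illustrative only):

/* Editorial sketch: offload offsets after the conversion. */
#include <linux/skbuff.h>
#include <linux/ip.h>
#include <linux/tcp.h>

static void sketch_offload_offsets(struct sk_buff *skb)
{
        /* was: skb->nh.raw - skb->data */
        unsigned int ipcss = skb_network_offset(skb);
        /* was: skb->h.raw - skb->data */
        unsigned int tucss = skb_transport_offset(skb);
        /* was: (skb->h.raw - skb->data) + (skb->h.th->doff << 2) */
        unsigned int hdr_len = skb_transport_offset(skb) + tcp_hdrlen(skb);

        (void)ipcss;
        (void)tucss;
        (void)hdr_len;
}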
htons(id); iph->check = 0; @@ -137,7 +139,6 @@ static int loopback_xmit(struct sk_buff skb_orphan(skb); skb->protocol = eth_type_trans(skb,dev); - skb->dev = dev; #ifndef LOOPBACK_MUST_CHECKSUM skb->ip_summed = CHECKSUM_UNNECESSARY; #endif @@ -145,7 +146,7 @@ static int loopback_xmit(struct sk_buff #ifdef LOOPBACK_TSO if (skb_is_gso(skb)) { BUG_ON(skb->protocol != htons(ETH_P_IP)); - BUG_ON(skb->nh.iph->protocol != IPPROTO_TCP); + BUG_ON(ip_hdr(skb)->protocol != IPPROTO_TCP); emulate_large_send_offload(skb); return 0; @@ -163,11 +164,9 @@ static int loopback_xmit(struct sk_buff return 0; } -static struct net_device_stats loopback_stats; - static struct net_device_stats *get_stats(struct net_device *dev) { - struct net_device_stats *stats = &loopback_stats; + struct net_device_stats *stats = &dev->stats; unsigned long bytes = 0; unsigned long packets = 0; int i; @@ -207,7 +206,6 @@ static const struct ethtool_ops loopback struct net_device loopback_dev = { .name = "lo", .get_stats = &get_stats, - .priv = &loopback_stats, .mtu = (16 * 1024) + 20 + 20 + 12, .hard_start_xmit = loopback_xmit, .hard_header = eth_header, diff -puN drivers/net/lp486e.c~git-net drivers/net/lp486e.c --- a/drivers/net/lp486e.c~git-net +++ a/drivers/net/lp486e.c @@ -676,7 +676,6 @@ i596_rx_one(struct net_device *dev, stru return 1; } - skb->dev = dev; memcpy(skb_put(skb,pkt_len), rfd->data, pkt_len); skb->protocol = eth_type_trans(skb,dev); diff -puN drivers/net/mac89x0.c~git-net drivers/net/mac89x0.c --- a/drivers/net/mac89x0.c~git-net +++ a/drivers/net/mac89x0.c @@ -530,7 +530,6 @@ net_rx(struct net_device *dev) return; } skb_put(skb, length); - skb->dev = dev; memcpy_fromio(skb->data, dev->mem_start + PP_RxFrame, length); diff -puN drivers/net/macb.c~git-net drivers/net/macb.c --- a/drivers/net/macb.c~git-net +++ a/drivers/net/macb.c @@ -357,7 +357,6 @@ static int macb_rx_frame(struct macb *bp } skb_reserve(skb, RX_OFFSET); - skb->dev = bp->dev; skb->ip_summed = CHECKSUM_NONE; skb_put(skb, len); @@ -368,9 +367,10 @@ static int macb_rx_frame(struct macb *bp BUG_ON(frag != last_frag); frag_len = len - offset; } - memcpy(skb->data + offset, - bp->rx_buffers + (RX_BUFFER_SIZE * frag), - frag_len); + skb_copy_to_linear_data_offset(skb, offset, + (bp->rx_buffers + + (RX_BUFFER_SIZE * frag)), + frag_len); offset += RX_BUFFER_SIZE; bp->rx_ring[frag].addr &= ~MACB_BIT(RX_USED); wmb(); @@ -576,7 +576,8 @@ static int macb_start_xmit(struct sk_buf int i; dev_dbg(&bp->pdev->dev, "start_xmit: len %u head %p data %p tail %p end %p\n", - skb->len, skb->head, skb->data, skb->tail, skb->end); + skb->len, skb->head, skb->data, + skb_tail_pointer(skb), skb_end_pointer(skb)); dev_dbg(&bp->pdev->dev, "data:"); for (i = 0; i < 16; i++) diff -puN drivers/net/mace.c~git-net drivers/net/mace.c --- a/drivers/net/mace.c~git-net +++ a/drivers/net/mace.c @@ -939,7 +939,6 @@ static irqreturn_t mace_rxdma_intr(int i else /* Ethernet header; mace includes FCS */ nb -= 8; skb_put(skb, nb); - skb->dev = dev; skb->protocol = eth_type_trans(skb, dev); mp->stats.rx_bytes += skb->len; netif_rx(skb); diff -puN drivers/net/macmace.c~git-net drivers/net/macmace.c --- a/drivers/net/macmace.c~git-net +++ a/drivers/net/macmace.c @@ -420,8 +420,7 @@ static int mace_xmit_start(struct sk_buf mp->stats.tx_bytes += skb->len; /* We need to copy into our xmit buffer to take care of alignment and caching issues */ - - memcpy((void *) mp->tx_ring, skb->data, skb->len); + skb_copy_from_linear_data(skb, mp->tx_ring, skb->len); /* load the Tx DMA and fire it off */ @@ 
-621,7 +620,6 @@ static void mace_dma_rx_frame(struct net skb_reserve(skb,2); memcpy(skb_put(skb, mf->len), mf->data, mf->len); - skb->dev = dev; skb->protocol = eth_type_trans(skb, dev); netif_rx(skb); dev->last_rx = jiffies; diff -puN drivers/net/meth.c~git-net drivers/net/meth.c --- a/drivers/net/meth.c~git-net +++ a/drivers/net/meth.c @@ -421,7 +421,6 @@ static void meth_rx(struct net_device* d /* Write metadata, and then pass to the receive level */ skb_put(skb_c, len); priv->rx_skbs[priv->rx_write] = skb; - skb_c->dev = dev; skb_c->protocol = eth_type_trans(skb_c, dev); dev->last_rx = jiffies; priv->stats.rx_packets++; @@ -609,7 +608,7 @@ static void meth_tx_short_prepare(struct desc->header.raw = METH_TX_CMD_INT_EN | (len-1) | ((128-len) << 16); /* maybe I should set whole thing to 0 first... */ - memcpy(desc->data.dt + (120 - len), skb->data, skb->len); + skb_copy_from_linear_data(skb, desc->data.dt + (120 - len), skb->len); if (skb->len < len) memset(desc->data.dt + 120 - len + skb->len, 0, len-skb->len); } @@ -627,8 +626,8 @@ static void meth_tx_1page_prepare(struct /* unaligned part */ if (unaligned_len) { - memcpy(desc->data.dt + (120 - unaligned_len), - skb->data, unaligned_len); + skb_copy_from_linear_data(skb, desc->data.dt + (120 - unaligned_len), + unaligned_len); desc->header.raw |= (128 - unaligned_len) << 16; } @@ -653,8 +652,8 @@ static void meth_tx_2page_prepare(struct desc->header.raw = METH_TX_CMD_INT_EN | TX_CATBUF1 | TX_CATBUF2| (skb->len - 1); /* unaligned part */ if (unaligned_len){ - memcpy(desc->data.dt + (120 - unaligned_len), - skb->data, unaligned_len); + skb_copy_from_linear_data(skb, desc->data.dt + (120 - unaligned_len), + unaligned_len); desc->header.raw |= (128 - unaligned_len) << 16; } diff -puN drivers/net/mipsnet.c~git-net drivers/net/mipsnet.c --- a/drivers/net/mipsnet.c~git-net +++ a/drivers/net/mipsnet.c @@ -99,7 +99,6 @@ static inline ssize_t mipsnet_get_fromde if (ioiocpy_frommipsnet(dev, skb_put(skb, len), len)) return -EFAULT; - skb->dev = dev; skb->protocol = eth_type_trans(skb, dev); skb->ip_summed = CHECKSUM_UNNECESSARY; diff -puN drivers/net/mv643xx_eth.c~git-net drivers/net/mv643xx_eth.c --- a/drivers/net/mv643xx_eth.c~git-net +++ a/drivers/net/mv643xx_eth.c @@ -434,7 +434,6 @@ static int mv643xx_eth_receive_queue(str * received packet */ skb_put(skb, pkt_info.byte_cnt - 4); - skb->dev = dev; if (pkt_info.cmd_sts & ETH_LAYER_4_CHECKSUM_OK) { skb->ip_summed = CHECKSUM_UNNECESSARY; @@ -1162,15 +1161,15 @@ static void eth_tx_submit_descs_for_skb( cmd_sts |= ETH_GEN_TCP_UDP_CHECKSUM | ETH_GEN_IP_V_4_CHECKSUM | - skb->nh.iph->ihl << ETH_TX_IHL_SHIFT; + ip_hdr(skb)->ihl << ETH_TX_IHL_SHIFT; - switch (skb->nh.iph->protocol) { + switch (ip_hdr(skb)->protocol) { case IPPROTO_UDP: cmd_sts |= ETH_UDP_FRAME; - desc->l4i_chk = skb->h.uh->check; + desc->l4i_chk = udp_hdr(skb)->check; break; case IPPROTO_TCP: - desc->l4i_chk = skb->h.th->check; + desc->l4i_chk = tcp_hdr(skb)->check; break; default: BUG(); diff -puN drivers/net/myri10ge/myri10ge.c~git-net drivers/net/myri10ge/myri10ge.c --- a/drivers/net/myri10ge/myri10ge.c~git-net +++ a/drivers/net/myri10ge/myri10ge.c @@ -879,7 +879,7 @@ myri10ge_rx_skb_build(struct sk_buff *sk * skb_pull() (for ether_pad and eth_type_trans()) requires * the beginning of the packet in skb_headlen(), move it * manually */ - memcpy(skb->data, va, hlen); + skb_copy_to_linear_data(skb, va, hlen); skb_shinfo(skb)->frags[0].page_offset += hlen; skb_shinfo(skb)->frags[0].size -= hlen; skb->data_len -= hlen; @@ -1020,7 +1020,6 
@@ myri10ge_rx_done(struct myri10ge_priv *m skb_shinfo(skb)->nr_frags = 0; } skb->protocol = eth_type_trans(skb, dev); - skb->dev = dev; if (mgp->csum_flag) { if ((skb->protocol == htons(ETH_P_IP)) || @@ -2030,7 +2029,7 @@ again: odd_flag = 0; flags = (MXGEFW_FLAGS_NO_TSO | MXGEFW_FLAGS_FIRST); if (likely(skb->ip_summed == CHECKSUM_PARTIAL)) { - cksum_offset = (skb->h.raw - skb->data); + cksum_offset = skb_transport_offset(skb); pseudo_hdr_offset = cksum_offset + skb->csum_offset; /* If the headers are excessively large, then we must * fall back to a software checksum */ @@ -2055,7 +2054,7 @@ again: * send loop that we are still in the * header portion of the TSO packet. * TSO header must be at most 134 bytes long */ - cum_len = -((skb->h.raw - skb->data) + (skb->h.th->doff << 2)); + cum_len = -(skb_transport_offset(skb) + tcp_hdrlen(skb)); /* for TSO, pseudo_hdr_offset holds mss. * The firmware figures out where to put diff -puN drivers/net/myri_sbus.c~git-net drivers/net/myri_sbus.c --- a/drivers/net/myri_sbus.c~git-net +++ a/drivers/net/myri_sbus.c @@ -368,7 +368,7 @@ static __be16 myri_type_trans(struct sk_ struct ethhdr *eth; unsigned char *rawp; - skb->mac.raw = (((unsigned char *)skb->data) + MYRI_PAD_LEN); + skb_set_mac_header(skb, MYRI_PAD_LEN); skb_pull(skb, dev->hard_header_len); eth = eth_hdr(skb); @@ -502,7 +502,7 @@ static void myri_rx(struct myri_eth *mp, copy_skb->dev = dev; DRX(("resv_and_put ")); skb_put(copy_skb, len); - memcpy(copy_skb->data, skb->data, len); + skb_copy_from_linear_data(skb, copy_skb->data, len); /* Reuse original ring buffer. */ DRX(("reuse ")); diff -puN drivers/net/natsemi.c~git-net drivers/net/natsemi.c --- a/drivers/net/natsemi.c~git-net +++ a/drivers/net/natsemi.c @@ -2289,7 +2289,6 @@ static void netdev_rx(struct net_device * without copying to a minimally-sized skbuff. */ if (pkt_len < rx_copybreak && (skb = dev_alloc_skb(pkt_len + RX_OFFSET)) != NULL) { - skb->dev = dev; /* 16 byte align the IP header */ skb_reserve(skb, RX_OFFSET); pci_dma_sync_single_for_cpu(np->pci_dev, diff -puN drivers/net/netx-eth.c~git-net drivers/net/netx-eth.c --- a/drivers/net/netx-eth.c~git-net +++ a/drivers/net/netx-eth.c @@ -168,7 +168,6 @@ static void netx_eth_receive(struct net_ FIFO_PTR_SEGMENT(seg) | FIFO_PTR_FRAMENO(frameno)); ndev->last_rx = jiffies; - skb->dev = ndev; skb->protocol = eth_type_trans(skb, ndev); netif_rx(skb); priv->stats.rx_packets++; diff -puN drivers/net/netxen/netxen_nic_hw.c~git-net drivers/net/netxen/netxen_nic_hw.c --- a/drivers/net/netxen/netxen_nic_hw.c~git-net +++ a/drivers/net/netxen/netxen_nic_hw.c @@ -35,6 +35,8 @@ #include "netxen_nic_hw.h" #include "netxen_nic_phan_reg.h" +#include + /* PCI Windowing for DDR regions. 
*/ #define ADDR_IN_RANGE(addr, low, high) \ @@ -371,22 +373,21 @@ void netxen_tso_check(struct netxen_adap struct cmd_desc_type0 *desc, struct sk_buff *skb) { if (desc->mss) { - desc->total_hdr_length = sizeof(struct ethhdr) + - ((skb->nh.iph)->ihl * sizeof(u32)) + - ((skb->h.th)->doff * sizeof(u32)); + desc->total_hdr_length = (sizeof(struct ethhdr) + + ip_hdrlen(skb) + tcp_hdrlen(skb)); netxen_set_cmd_desc_opcode(desc, TX_TCP_LSO); } else if (skb->ip_summed == CHECKSUM_PARTIAL) { - if (skb->nh.iph->protocol == IPPROTO_TCP) { + if (ip_hdr(skb)->protocol == IPPROTO_TCP) { netxen_set_cmd_desc_opcode(desc, TX_TCP_PKT); - } else if (skb->nh.iph->protocol == IPPROTO_UDP) { + } else if (ip_hdr(skb)->protocol == IPPROTO_UDP) { netxen_set_cmd_desc_opcode(desc, TX_UDP_PKT); } else { return; } } adapter->stats.xmitcsummed++; - desc->tcp_hdr_offset = skb->h.raw - skb->data; - desc->ip_hdr_offset = skb->nh.raw - skb->data; + desc->tcp_hdr_offset = skb_transport_offset(skb); + desc->ip_hdr_offset = skb_network_offset(skb); } int netxen_is_flash_supported(struct netxen_adapter *adapter) diff -puN drivers/net/netxen/netxen_nic_init.c~git-net drivers/net/netxen/netxen_nic_init.c --- a/drivers/net/netxen/netxen_nic_init.c~git-net +++ a/drivers/net/netxen/netxen_nic_init.c @@ -1129,7 +1129,6 @@ netxen_process_rcv(struct netxen_adapter port->stats.csummed++; skb->ip_summed = CHECKSUM_UNNECESSARY; } - skb->dev = netdev; if (desc_ctx == RCV_DESC_LRO_CTXID) { /* True length was only available on the last pkt */ skb_put(skb, buffer->lro_length); diff -puN drivers/net/netxen/netxen_nic_main.c~git-net drivers/net/netxen/netxen_nic_main.c --- a/drivers/net/netxen/netxen_nic_main.c~git-net +++ a/drivers/net/netxen/netxen_nic_main.c @@ -41,6 +41,7 @@ #include #include +#include MODULE_DESCRIPTION("NetXen Multi port (1/10) Gigabit Network Driver"); MODULE_LICENSE("GPL"); @@ -778,9 +779,8 @@ static int netxen_nic_xmit_frame(struct if (skb_shinfo(skb)->gso_size > 0) { no_of_desc++; - if (((skb->nh.iph)->ihl * sizeof(u32)) + - ((skb->h.th)->doff * sizeof(u32)) + - sizeof(struct ethhdr) > + if ((ip_hdrlen(skb) + tcp_hdrlen(skb) + + sizeof(struct ethhdr)) > (sizeof(struct cmd_desc_type0) - 2)) { no_of_desc++; } @@ -920,8 +920,10 @@ static int netxen_nic_xmit_frame(struct /* copy the next 64 bytes - should be enough except * for pathological case */ - memcpy((void *)hwdesc, (void *)(skb->data) + - first_hdr_len, hdr_len - first_hdr_len); + skb_copy_from_linear_data_offset(skb, first_hdr_len, + hwdesc, + (hdr_len - + first_hdr_len)); producer = get_next_index(producer, max_tx_desc_count); } } diff -puN drivers/net/ni5010.c~git-net drivers/net/ni5010.c --- a/drivers/net/ni5010.c~git-net +++ a/drivers/net/ni5010.c @@ -562,7 +562,6 @@ static void ni5010_rx(struct net_device return; } - skb->dev = dev; skb_reserve(skb, 2); /* Read packet into buffer */ diff -puN drivers/net/ni52.c~git-net drivers/net/ni52.c --- a/drivers/net/ni52.c~git-net +++ a/drivers/net/ni52.c @@ -934,7 +934,6 @@ static void ni52_rcv_int(struct net_devi skb = (struct sk_buff *) dev_alloc_skb(totlen+2); if(skb != NULL) { - skb->dev = dev; skb_reserve(skb,2); skb_put(skb,totlen); eth_copy_and_sum(skb,(char *) p->base+(unsigned long) rbd->buffer,totlen,0); @@ -1183,7 +1182,7 @@ static int ni52_send_packet(struct sk_bu else #endif { - memcpy((char *)p->xmit_cbuffs[p->xmit_count],(char *)(skb->data),skb->len); + skb_copy_from_linear_data(skb, (char *) p->xmit_cbuffs[p->xmit_count], skb->len); len = skb->len; if (len < ETH_ZLEN) { len = ETH_ZLEN; diff -puN 
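[Editorial note] The netxen hunks above rely on ip_hdrlen() and tcp_hdrlen(), which fold the ihl/doff shifts into a single call. For reference, an editorial sketch of the total header length they compute (not code from the patch):

/* Editorial sketch: ip_hdrlen(skb) == ip_hdr(skb)->ihl * 4,
 * tcp_hdrlen(skb) == tcp_hdr(skb)->doff * 4. */
#include <linux/skbuff.h>
#include <linux/if_ether.h>
#include <linux/ip.h>
#include <linux/tcp.h>

static unsigned int sketch_l2l3l4_header_len(const struct sk_buff *skb)
{
        return sizeof(struct ethhdr) + ip_hdrlen(skb) + tcp_hdrlen(skb);
}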
drivers/net/ni65.c~git-net drivers/net/ni65.c --- a/drivers/net/ni65.c~git-net +++ a/drivers/net/ni65.c @@ -610,7 +610,6 @@ static void *ni65_alloc_mem(struct net_d printk(KERN_WARNING "%s: unable to allocate %s memory.\n",dev->name,what); return NULL; } - skb->dev = dev; skb_reserve(skb,2+16); skb_put(skb,R_BUF_SIZE); /* grab the whole space .. (not necessary) */ ptr = skb->data; @@ -1094,7 +1093,6 @@ static void ni65_recv_intr(struct net_de if(skb) { skb_reserve(skb,2); - skb->dev = dev; #ifdef RCV_VIA_SKB if( (unsigned long) (skb->data + R_BUF_SIZE) > 0x1000000) { skb_put(skb,len); @@ -1178,8 +1176,9 @@ static int ni65_send_packet(struct sk_bu if( (unsigned long) (skb->data + skb->len) > 0x1000000) { #endif - memcpy((char *) p->tmdbounce[p->tmdbouncenum] ,(char *)skb->data, - (skb->len > T_BUF_SIZE) ? T_BUF_SIZE : skb->len); + skb_copy_from_linear_data(skb, p->tmdbounce[p->tmdbouncenum], + skb->len > T_BUF_SIZE ? T_BUF_SIZE : + skb->len); if (len > skb->len) memset((char *)p->tmdbounce[p->tmdbouncenum]+skb->len, 0, len-skb->len); dev_kfree_skb (skb); diff -puN drivers/net/ns83820.c~git-net drivers/net/ns83820.c --- a/drivers/net/ns83820.c~git-net +++ a/drivers/net/ns83820.c @@ -611,7 +611,6 @@ static inline int rx_refill(struct net_d res &= 0xf; skb_reserve(skb, res); - skb->dev = ndev; if (gfp != GFP_ATOMIC) spin_lock_irqsave(&dev->rx_info.lock, flags); res = ns83820_add_rx_skb(dev, skb); @@ -1161,9 +1160,9 @@ again: extsts = 0; if (skb->ip_summed == CHECKSUM_PARTIAL) { extsts |= EXTSTS_IPPKT; - if (IPPROTO_TCP == skb->nh.iph->protocol) + if (IPPROTO_TCP == ip_hdr(skb)->protocol) extsts |= EXTSTS_TCPPKT; - else if (IPPROTO_UDP == skb->nh.iph->protocol) + else if (IPPROTO_UDP == ip_hdr(skb)->protocol) extsts |= EXTSTS_UDPPKT; } diff -puN drivers/net/pasemi_mac.c~git-net drivers/net/pasemi_mac.c --- a/drivers/net/pasemi_mac.c~git-net +++ a/drivers/net/pasemi_mac.c @@ -334,8 +334,6 @@ static void pasemi_mac_replenish_rx_ring break; } - skb->dev = dev; - dma = pci_map_single(mac->dma_pdev, skb->data, skb->len, PCI_DMA_FROMDEVICE); @@ -731,16 +729,18 @@ static int pasemi_mac_start_tx(struct sk dflags = XCT_MACTX_O | XCT_MACTX_ST | XCT_MACTX_SS | XCT_MACTX_CRC_PAD; if (skb->ip_summed == CHECKSUM_PARTIAL) { - switch (skb->nh.iph->protocol) { + const unsigned char *nh = skb_network_header(skb); + + switch (ip_hdr(skb)->protocol) { case IPPROTO_TCP: dflags |= XCT_MACTX_CSUM_TCP; - dflags |= XCT_MACTX_IPH((skb->h.raw - skb->nh.raw) >> 2); - dflags |= XCT_MACTX_IPO(skb->nh.raw - skb->data); + dflags |= XCT_MACTX_IPH(skb_network_header_len(skb) >> 2); + dflags |= XCT_MACTX_IPO(nh - skb->data); break; case IPPROTO_UDP: dflags |= XCT_MACTX_CSUM_UDP; - dflags |= XCT_MACTX_IPH((skb->h.raw - skb->nh.raw) >> 2); - dflags |= XCT_MACTX_IPO(skb->nh.raw - skb->data); + dflags |= XCT_MACTX_IPH(skb_network_header_len(skb) >> 2); + dflags |= XCT_MACTX_IPO(nh - skb->data); break; } } diff -puN drivers/net/pci-skeleton.c~git-net drivers/net/pci-skeleton.c --- a/drivers/net/pci-skeleton.c~git-net +++ a/drivers/net/pci-skeleton.c @@ -1344,7 +1344,7 @@ static int netdrv_start_xmit (struct sk_ tp->tx_info[entry].skb = skb; /* tp->tx_info[entry].mapping = 0; */ - memcpy (tp->tx_buf[entry], skb->data, skb->len); + skb_copy_from_linear_data(skb, tp->tx_buf[entry], skb->len); /* Note: the chip doesn't have auto-pad! 
*/ NETDRV_W32 (TxStatus0 + (entry * sizeof(u32)), @@ -1565,7 +1565,6 @@ static void netdrv_rx_interrupt (struct skb = dev_alloc_skb (pkt_size + 2); if (skb) { - skb->dev = dev; skb_reserve (skb, 2); /* 16 byte align the IP fields. */ eth_copy_and_sum (skb, &rx_ring[ring_offset + 4], pkt_size, 0); diff -puN drivers/net/pcmcia/3c574_cs.c~git-net drivers/net/pcmcia/3c574_cs.c --- a/drivers/net/pcmcia/3c574_cs.c~git-net +++ a/drivers/net/pcmcia/3c574_cs.c @@ -1056,7 +1056,6 @@ static int el3_rx(struct net_device *dev DEBUG(3, " Receiving packet size %d status %4.4x.\n", pkt_len, rx_status); if (skb != NULL) { - skb->dev = dev; skb_reserve(skb, 2); insl(ioaddr+RX_FIFO, skb_put(skb, pkt_len), ((pkt_len+3)>>2)); diff -puN drivers/net/pcmcia/3c589_cs.c~git-net drivers/net/pcmcia/3c589_cs.c --- a/drivers/net/pcmcia/3c589_cs.c~git-net +++ a/drivers/net/pcmcia/3c589_cs.c @@ -883,7 +883,6 @@ static int el3_rx(struct net_device *dev DEBUG(3, " Receiving packet size %d status %4.4x.\n", pkt_len, rx_status); if (skb != NULL) { - skb->dev = dev; skb_reserve(skb, 2); insl(ioaddr+RX_FIFO, skb_put(skb, pkt_len), (pkt_len+3)>>2); diff -puN drivers/net/pcmcia/axnet_cs.c~git-net drivers/net/pcmcia/axnet_cs.c --- a/drivers/net/pcmcia/axnet_cs.c~git-net +++ a/drivers/net/pcmcia/axnet_cs.c @@ -1136,7 +1136,7 @@ static int ei_start_xmit(struct sk_buff ei_block_output(dev, length, skb->data, output_page); else { memset(packet, 0, ETH_ZLEN); - memcpy(packet, skb->data, skb->len); + skb_copy_from_linear_data(skb, packet, skb->len); ei_block_output(dev, length, packet, output_page); } @@ -1496,7 +1496,6 @@ static void ei_receive(struct net_device else { skb_reserve(skb,2); /* IP headers on 16 byte boundaries */ - skb->dev = dev; skb_put(skb, pkt_len); /* Make room */ ei_block_input(dev, pkt_len, skb, current_offset + sizeof(rx_frame)); skb->protocol=eth_type_trans(skb,dev); diff -puN drivers/net/pcmcia/fmvj18x_cs.c~git-net drivers/net/pcmcia/fmvj18x_cs.c --- a/drivers/net/pcmcia/fmvj18x_cs.c~git-net +++ a/drivers/net/pcmcia/fmvj18x_cs.c @@ -999,7 +999,6 @@ static void fjn_rx(struct net_device *de lp->stats.rx_dropped++; break; } - skb->dev = dev; skb_reserve(skb, 2); insw(ioaddr + DATAPORT, skb_put(skb, pkt_len), diff -puN drivers/net/pcmcia/nmclan_cs.c~git-net drivers/net/pcmcia/nmclan_cs.c --- a/drivers/net/pcmcia/nmclan_cs.c~git-net +++ a/drivers/net/pcmcia/nmclan_cs.c @@ -1182,12 +1182,10 @@ static int mace_rx(struct net_device *de skb = dev_alloc_skb(pkt_len+2); if (skb != NULL) { - skb->dev = dev; - skb_reserve(skb, 2); insw(ioaddr + AM2150_RCV, skb_put(skb, pkt_len), pkt_len>>1); if (pkt_len & 1) - *(skb->tail-1) = inb(ioaddr + AM2150_RCV); + *(skb_tail_pointer(skb) - 1) = inb(ioaddr + AM2150_RCV); skb->protocol = eth_type_trans(skb, dev); netif_rx(skb); /* Send the packet to the upper (protocol) layers. 
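Nearly every receive-path hunk in this run of PCMCIA and 10/100 drivers simply deletes "skb->dev = dev;". That line is redundant because eth_type_trans() (and the token-ring/FDDI/HIPPI type_trans helpers used elsewhere in the series) already stores the device it is handed into skb->dev. A minimal, hypothetical sketch of the resulting copy-and-push-up pattern follows; hw_buf stands in for wherever the hardware data actually comes from.

#include <linux/jiffies.h>
#include <linux/skbuff.h>
#include <linux/netdevice.h>
#include <linux/etherdevice.h>

/* Sketch only: the shape most of the converted RX paths now share. */
static void fake_rx_one(struct net_device *dev, const void *hw_buf, unsigned int pkt_len)
{
	struct sk_buff *skb = dev_alloc_skb(pkt_len + 2);

	if (!skb)
		return;				/* real drivers bump rx_dropped here */

	skb_reserve(skb, 2);			/* 16-byte align the IP header */
	skb_put(skb, pkt_len);
	skb_copy_to_linear_data(skb, hw_buf, pkt_len);	/* placeholder for the HW copy */

	/* no "skb->dev = dev" needed: eth_type_trans() sets it */
	skb->protocol = eth_type_trans(skb, dev);
	netif_rx(skb);
	dev->last_rx = jiffies;
}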
*/ diff -puN drivers/net/pcmcia/smc91c92_cs.c~git-net drivers/net/pcmcia/smc91c92_cs.c --- a/drivers/net/pcmcia/smc91c92_cs.c~git-net +++ a/drivers/net/pcmcia/smc91c92_cs.c @@ -1669,7 +1669,6 @@ static void smc_rx(struct net_device *de (packet_length+1)>>1); skb->protocol = eth_type_trans(skb, dev); - skb->dev = dev; netif_rx(skb); dev->last_rx = jiffies; smc->stats.rx_packets++; diff -puN drivers/net/pcmcia/xirc2ps_cs.c~git-net drivers/net/pcmcia/xirc2ps_cs.c --- a/drivers/net/pcmcia/xirc2ps_cs.c~git-net +++ a/drivers/net/pcmcia/xirc2ps_cs.c @@ -1226,7 +1226,6 @@ xirc2ps_interrupt(int irq, void *dev_id) (pktlen+1)>>1); } skb->protocol = eth_type_trans(skb, dev); - skb->dev = dev; netif_rx(skb); dev->last_rx = jiffies; lp->stats.rx_packets++; diff -puN drivers/net/pcnet32.c~git-net drivers/net/pcnet32.c --- a/drivers/net/pcnet32.c~git-net +++ a/drivers/net/pcnet32.c @@ -1206,7 +1206,6 @@ static void pcnet32_rx_entry(struct net_ PCI_DMA_FROMDEVICE); skb_put(skb, pkt_len); lp->rx_skbuff[entry] = newskb; - newskb->dev = dev; lp->rx_dma_addr[entry] = pci_map_single(lp->pci_dev, newskb->data, diff -puN drivers/net/plip.c~git-net drivers/net/plip.c --- a/drivers/net/plip.c~git-net +++ a/drivers/net/plip.c @@ -546,7 +546,7 @@ static __be16 plip_type_trans(struct sk_ struct ethhdr *eth; unsigned char *rawp; - skb->mac.raw=skb->data; + skb_reset_mac_header(skb); skb_pull(skb,dev->hard_header_len); eth = eth_hdr(skb); diff -puN drivers/net/ppp_generic.c~git-net drivers/net/ppp_generic.c --- a/drivers/net/ppp_generic.c~git-net +++ a/drivers/net/ppp_generic.c @@ -1685,7 +1685,7 @@ ppp_receive_nonmp_frame(struct ppp *ppp, skb_pull_rcsum(skb, 2); skb->dev = ppp->dev; skb->protocol = htons(npindex_to_ethertype[npi]); - skb->mac.raw = skb->data; + skb_reset_mac_header(skb); netif_rx(skb); ppp->dev->last_rx = jiffies; } diff -puN drivers/net/ppp_synctty.c~git-net drivers/net/ppp_synctty.c --- a/drivers/net/ppp_synctty.c~git-net +++ a/drivers/net/ppp_synctty.c @@ -594,7 +594,8 @@ ppp_sync_txmunge(struct syncppp *ap, str return NULL; } skb_reserve(npkt,2); - memcpy(skb_put(npkt,skb->len), skb->data, skb->len); + skb_copy_from_linear_data(skb, + skb_put(npkt, skb->len), skb->len); kfree_skb(skb); skb = npkt; } diff -puN drivers/net/pppoe.c~git-net drivers/net/pppoe.c --- a/drivers/net/pppoe.c~git-net +++ a/drivers/net/pppoe.c @@ -207,7 +207,7 @@ static inline struct pppox_sock *get_ite static inline struct pppox_sock *get_item_by_addr(struct sockaddr_pppox *sp) { - struct net_device *dev = NULL; + struct net_device *dev; int ifindex; dev = dev_get_by_name(sp->sa_addr.pppoe.dev); @@ -218,20 +218,6 @@ static inline struct pppox_sock *get_ite return get_item(sp->sa_addr.pppoe.sid, sp->sa_addr.pppoe.remote, ifindex); } -static inline int set_item(struct pppox_sock *po) -{ - int i; - - if (!po) - return -EINVAL; - - write_lock_bh(&pppoe_hash_lock); - i = __set_item(po); - write_unlock_bh(&pppoe_hash_lock); - - return i; -} - static inline struct pppox_sock *delete_item(unsigned long sid, char *addr, int ifindex) { struct pppox_sock *ret; @@ -255,54 +241,53 @@ static inline struct pppox_sock *delete_ static void pppoe_flush_dev(struct net_device *dev) { int hash; - BUG_ON(dev == NULL); - read_lock_bh(&pppoe_hash_lock); + write_lock_bh(&pppoe_hash_lock); for (hash = 0; hash < PPPOE_HASH_SIZE; hash++) { struct pppox_sock *po = item_hash_table[hash]; while (po != NULL) { - if (po->pppoe_dev == dev) { - struct sock *sk = sk_pppox(po); + struct sock *sk = sk_pppox(po); + if (po->pppoe_dev != dev) { + po = po->next; + 
continue; + } + po->pppoe_dev = NULL; + dev_put(dev); - sock_hold(sk); - po->pppoe_dev = NULL; - /* We hold a reference to SK, now drop the - * hash table lock so that we may attempt - * to lock the socket (which can sleep). - */ - read_unlock_bh(&pppoe_hash_lock); - - lock_sock(sk); - - if (sk->sk_state & - (PPPOX_CONNECTED | PPPOX_BOUND)) { - pppox_unbind_sock(sk); - dev_put(dev); - sk->sk_state = PPPOX_ZOMBIE; - sk->sk_state_change(sk); - } - - release_sock(sk); - - sock_put(sk); - - read_lock_bh(&pppoe_hash_lock); - - /* Now restart from the beginning of this - * hash chain. We always NULL out pppoe_dev - * so we are guaranteed to make forward - * progress. - */ - po = item_hash_table[hash]; - continue; + /* We always grab the socket lock, followed by the + * pppoe_hash_lock, in that order. Since we should + * hold the sock lock while doing any unbinding, + * we need to release the lock we're holding. + * Hold a reference to the sock so it doesn't disappear + * as we're jumping between locks. + */ + + sock_hold(sk); + + write_unlock_bh(&pppoe_hash_lock); + lock_sock(sk); + + if (sk->sk_state & (PPPOX_CONNECTED | PPPOX_BOUND)) { + pppox_unbind_sock(sk); + sk->sk_state = PPPOX_ZOMBIE; + sk->sk_state_change(sk); } - po = po->next; + + release_sock(sk); + sock_put(sk); + + /* Restart scan at the beginning of this hash chain. + * While the lock was dropped the chain contents may + * have changed. + */ + write_lock_bh(&pppoe_hash_lock); + po = item_hash_table[hash]; } } - read_unlock_bh(&pppoe_hash_lock); + write_unlock_bh(&pppoe_hash_lock); } static int pppoe_device_event(struct notifier_block *this, @@ -344,10 +329,10 @@ static struct notifier_block pppoe_notif static int pppoe_rcv_core(struct sock *sk, struct sk_buff *skb) { struct pppox_sock *po = pppox_sk(sk); - struct pppox_sock *relay_po = NULL; + struct pppox_sock *relay_po; if (sk->sk_state & PPPOX_BOUND) { - struct pppoe_hdr *ph = (struct pppoe_hdr *) skb->nh.raw; + struct pppoe_hdr *ph = pppoe_hdr(skb); int len = ntohs(ph->length); skb_pull_rcsum(skb, sizeof(struct pppoe_hdr)); if (pskb_trim_rcsum(skb, len)) @@ -401,7 +386,7 @@ static int pppoe_rcv(struct sk_buff *skb if (!(skb = skb_share_check(skb, GFP_ATOMIC))) goto out; - ph = (struct pppoe_hdr *) skb->nh.raw; + ph = pppoe_hdr(skb); po = get_item((unsigned long) ph->sid, eth_hdr(skb)->h_source, dev->ifindex); if (po != NULL) @@ -433,7 +418,7 @@ static int pppoe_disc_rcv(struct sk_buff if (!(skb = skb_share_check(skb, GFP_ATOMIC))) goto out; - ph = (struct pppoe_hdr *) skb->nh.raw; + ph = pppoe_hdr(skb); if (ph->code != PADT_CODE) goto abort; @@ -514,36 +499,49 @@ static int pppoe_release(struct socket * { struct sock *sk = sock->sk; struct pppox_sock *po; - int error = 0; if (!sk) return 0; - if (sock_flag(sk, SOCK_DEAD)) + lock_sock(sk); + if (sock_flag(sk, SOCK_DEAD)){ + release_sock(sk); return -EBADF; + } pppox_unbind_sock(sk); /* Signal the death of the socket. */ sk->sk_state = PPPOX_DEAD; + + /* Write lock on hash lock protects the entire "po" struct from + * concurrent updates via pppoe_flush_dev. 
The "po" struct should + * be considered part of the hash table contents, thus protected + * by the hash table lock */ + write_lock_bh(&pppoe_hash_lock); + po = pppox_sk(sk); if (po->pppoe_pa.sid) { - delete_item(po->pppoe_pa.sid, po->pppoe_pa.remote, po->pppoe_ifindex); + __delete_item(po->pppoe_pa.sid, + po->pppoe_pa.remote, po->pppoe_ifindex); } - if (po->pppoe_dev) + if (po->pppoe_dev) { dev_put(po->pppoe_dev); + po->pppoe_dev = NULL; + } - po->pppoe_dev = NULL; + write_unlock_bh(&pppoe_hash_lock); sock_orphan(sk); sock->sk = NULL; skb_queue_purge(&sk->sk_receive_queue); + release_sock(sk); sock_put(sk); - return error; + return 0; } @@ -599,14 +597,18 @@ static int pppoe_connect(struct socket * po->pppoe_dev = dev; po->pppoe_ifindex = dev->ifindex; - if (!(dev->flags & IFF_UP)) + write_lock_bh(&pppoe_hash_lock); + if (!(dev->flags & IFF_UP)){ + write_unlock_bh(&pppoe_hash_lock); goto err_put; + } memcpy(&po->pppoe_pa, &sp->sa_addr.pppoe, sizeof(struct pppoe_addr)); - error = set_item(po); + error = __set_item(po); + write_unlock_bh(&pppoe_hash_lock); if (error < 0) goto err_put; @@ -762,10 +764,10 @@ static int pppoe_ioctl(struct socket *so static int pppoe_sendmsg(struct kiocb *iocb, struct socket *sock, struct msghdr *m, size_t total_len) { - struct sk_buff *skb = NULL; + struct sk_buff *skb; struct sock *sk = sock->sk; struct pppox_sock *po = pppox_sk(sk); - int error = 0; + int error; struct pppoe_hdr hdr; struct pppoe_hdr *ph; struct net_device *dev; @@ -799,7 +801,7 @@ static int pppoe_sendmsg(struct kiocb *i /* Reserve space for headers. */ skb_reserve(skb, dev->hard_header_len); - skb->nh.raw = skb->data; + skb_reset_network_header(skb); skb->dev = dev; @@ -869,7 +871,8 @@ static int __pppoe_xmit(struct sock *sk, goto abort; skb_reserve(skb2, dev->hard_header_len + sizeof(struct pppoe_hdr)); - memcpy(skb_put(skb2, skb->len), skb->data, skb->len); + skb_copy_from_linear_data(skb, skb_put(skb2, skb->len), + skb->len); } else { /* Make a clone so as to not disturb the original skb, * give dev_queue_xmit something it can free. 
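The pppoe.c changes above are the one part of this series that reworks behaviour rather than spelling: every hash-table update now happens under write_lock_bh(&pppoe_hash_lock) (callers use __set_item()/__delete_item() directly), and pppoe_flush_dev() honours the lock order stated in its new comment, socket lock first, hash lock second, by pinning the socket, dropping the hash lock, locking the socket, and then rescanning the chain. The sketch below compresses that pattern; fake_hash_lock, fake_first_match() and fake_detach() are stand-ins for the pppoe specifics, and only the locking and refcount calls are the real API.

#include <linux/spinlock.h>
#include <linux/netdevice.h>
#include <net/sock.h>

static DEFINE_RWLOCK(fake_hash_lock);		/* plays the role of pppoe_hash_lock */

/* fake_first_match() is assumed to clear the socket's device binding while
 * the hash lock is held (as pppoe_flush_dev() NULLs po->pppoe_dev), so it
 * never hands back the same socket twice and the loop terminates.
 */
static void fake_flush_dev(struct net_device *dev,
			   struct sock *(*fake_first_match)(struct net_device *),
			   void (*fake_detach)(struct sock *))
{
	struct sock *sk;

	write_lock_bh(&fake_hash_lock);
	while ((sk = fake_first_match(dev)) != NULL) {
		sock_hold(sk);			/* keep sk alive across the unlock   */
		write_unlock_bh(&fake_hash_lock);

		lock_sock(sk);			/* may sleep: hash lock must be gone */
		fake_detach(sk);		/* e.g. pppox_unbind_sock(), ZOMBIE  */
		release_sock(sk);
		sock_put(sk);

		write_lock_bh(&fake_hash_lock);	/* rescan: chain may have changed    */
	}
	write_unlock_bh(&fake_hash_lock);
}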
@@ -884,7 +887,7 @@ static int __pppoe_xmit(struct sock *sk, memcpy(ph, &hdr, sizeof(struct pppoe_hdr)); skb2->protocol = __constant_htons(ETH_P_PPP_SES); - skb2->nh.raw = skb2->data; + skb_reset_network_header(skb2); skb2->dev = dev; @@ -929,10 +932,8 @@ static int pppoe_recvmsg(struct kiocb *i struct msghdr *m, size_t total_len, int flags) { struct sock *sk = sock->sk; - struct sk_buff *skb = NULL; + struct sk_buff *skb; int error = 0; - int len; - struct pppoe_hdr *ph = NULL; if (sk->sk_state & PPPOX_BOUND) { error = -EIO; @@ -942,26 +943,21 @@ static int pppoe_recvmsg(struct kiocb *i skb = skb_recv_datagram(sk, flags & ~MSG_DONTWAIT, flags & MSG_DONTWAIT, &error); - if (error < 0) { + if (error < 0) goto end; - } m->msg_namelen = 0; if (skb) { - error = 0; - ph = (struct pppoe_hdr *) skb->nh.raw; - len = ntohs(ph->length); + struct pppoe_hdr *ph = pppoe_hdr(skb); + const int len = ntohs(ph->length); error = memcpy_toiovec(m->msg_iov, (unsigned char *) &ph->tag[0], len); - if (error < 0) - goto do_skb_free; - error = len; + if (error == 0) + error = len; } -do_skb_free: - if (skb) - kfree_skb(skb); + kfree_skb(skb); end: return error; } @@ -991,7 +987,7 @@ out: static __inline__ struct pppox_sock *pppoe_get_idx(loff_t pos) { - struct pppox_sock *po = NULL; + struct pppox_sock *po; int i = 0; for (; i < PPPOE_HASH_SIZE; i++) { diff -puN drivers/net/pppox.c~git-net drivers/net/pppox.c --- a/drivers/net/pppox.c~git-net +++ a/drivers/net/pppox.c @@ -58,7 +58,7 @@ void pppox_unbind_sock(struct sock *sk) { /* Clear connection to ppp device, if attached. */ - if (sk->sk_state & (PPPOX_BOUND | PPPOX_ZOMBIE)) { + if (sk->sk_state & (PPPOX_BOUND | PPPOX_CONNECTED | PPPOX_ZOMBIE)) { ppp_unregister_channel(&pppox_sk(sk)->chan); sk->sk_state = PPPOX_DEAD; } diff -puN drivers/net/qla3xxx.c~git-net drivers/net/qla3xxx.c --- a/drivers/net/qla3xxx.c~git-net +++ a/drivers/net/qla3xxx.c @@ -2151,7 +2151,6 @@ static void ql_process_mac_rx_intr(struc pci_unmap_len(lrg_buf_cb2, maplen), PCI_DMA_FROMDEVICE); prefetch(skb->data); - skb->dev = qdev->ndev; skb->ip_summed = CHECKSUM_NONE; skb->protocol = eth_type_trans(skb, qdev->ndev); @@ -2206,7 +2205,8 @@ static void ql_process_macip_rx_intr(str * Copy the ethhdr from first buffer to second. This * is necessary for 3022 IP completions. 
*/ - memcpy(skb_push(skb2, size), skb1->data + VLAN_ID_LEN, size); + skb_copy_from_linear_data_offset(skb1, VLAN_ID_LEN, + skb_push(skb2, size), size); } else { u16 checksum = le16_to_cpu(ib_ip_rsp_ptr->checksum); if (checksum & @@ -2224,7 +2224,6 @@ static void ql_process_macip_rx_intr(str skb2->ip_summed = CHECKSUM_UNNECESSARY; } } - skb2->dev = qdev->ndev; skb2->protocol = eth_type_trans(skb2, qdev->ndev); netif_receive_skb(skb2); diff -puN drivers/net/r8169.c~git-net drivers/net/r8169.c --- a/drivers/net/r8169.c~git-net +++ a/drivers/net/r8169.c @@ -2284,7 +2284,7 @@ static inline u32 rtl8169_tso_csum(struc return LargeSend | ((mss & MSSMask) << MSSShift); } if (skb->ip_summed == CHECKSUM_PARTIAL) { - const struct iphdr *ip = skb->nh.iph; + const struct iphdr *ip = ip_hdr(skb); if (ip->protocol == IPPROTO_TCP) return IPCS | TCPCS; @@ -2586,7 +2586,6 @@ rtl8169_rx_interrupt(struct net_device * pci_action(tp->pci_dev, le64_to_cpu(desc->addr), tp->rx_buf_sz, PCI_DMA_FROMDEVICE); - skb->dev = dev; skb_put(skb, pkt_size); skb->protocol = eth_type_trans(skb, dev); diff -puN drivers/net/rionet.c~git-net drivers/net/rionet.c --- a/drivers/net/rionet.c~git-net +++ a/drivers/net/rionet.c @@ -115,7 +115,6 @@ static int rionet_rx_clean(struct net_de rnet->rx_skb[i]->data = data; skb_put(rnet->rx_skb[i], RIO_MAX_MSG_SIZE); - rnet->rx_skb[i]->dev = ndev; rnet->rx_skb[i]->protocol = eth_type_trans(rnet->rx_skb[i], ndev); error = netif_rx(rnet->rx_skb[i]); diff -puN drivers/net/rrunner.c~git-net drivers/net/rrunner.c --- a/drivers/net/rrunner.c~git-net +++ a/drivers/net/rrunner.c @@ -1029,7 +1029,6 @@ static void rx_int(struct net_device *de goto defer; } } - skb->dev = dev; skb->protocol = hippi_type_trans(skb, dev); netif_rx(skb); /* send it up */ @@ -1452,7 +1451,7 @@ static int rr_start_xmit(struct sk_buff } skb_reserve(new_skb, 8); skb_put(new_skb, len); - memcpy(new_skb->data, skb->data, len); + skb_copy_from_linear_data(skb, new_skb->data, len); dev_kfree_skb(skb); skb = new_skb; } diff -puN drivers/net/s2io.c~git-net drivers/net/s2io.c --- a/drivers/net/s2io.c~git-net +++ a/drivers/net/s2io.c @@ -2194,7 +2194,7 @@ static int fill_rxd_3buf(struct s2io_nic frag_list->next = NULL; tmp = (void *)ALIGN((long)frag_list->data, ALIGN_SIZE + 1); frag_list->data = tmp; - frag_list->tail = tmp; + skb_reset_tail_pointer(frag_list); /* Buffer-2 receives L4 data payload */ ((struct RxD3*)rxdp)->Buffer2_ptr = pci_map_single(nic->pdev, @@ -2356,7 +2356,7 @@ static int fill_rx_buffers(struct s2io_n tmp += ALIGN_SIZE; tmp &= ~ALIGN_SIZE; skb->data = (void *) (unsigned long)tmp; - skb->tail = (void *) (unsigned long)tmp; + skb_reset_tail_pointer(skb); if (!(((struct RxD3*)rxdp)->Buffer0_ptr)) ((struct RxD3*)rxdp)->Buffer0_ptr = diff -puN drivers/net/saa9730.c~git-net drivers/net/saa9730.c --- a/drivers/net/saa9730.c~git-net +++ a/drivers/net/saa9730.c @@ -688,7 +688,6 @@ static int lan_saa9730_rx(struct net_dev } else { lp->stats.rx_bytes += len; lp->stats.rx_packets++; - skb->dev = dev; skb_reserve(skb, 2); /* 16 byte align */ skb_put(skb, len); /* make room */ eth_copy_and_sum(skb, diff -puN drivers/net/sb1000.c~git-net drivers/net/sb1000.c --- a/drivers/net/sb1000.c~git-net +++ a/drivers/net/sb1000.c @@ -834,7 +834,7 @@ printk("cm0: IP identification: %02x%02x goto dropped_frame; } skb->dev = dev; - skb->mac.raw = skb->data; + skb_reset_mac_header(skb); skb->protocol = (unsigned short) buffer[NewDatagramHeaderSkip + 16]; insw(ioaddr, skb_put(skb, NewDatagramDataSize), NewDatagramDataSize / 2); diff -puN 
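Besides the copy helpers, this part of the series stops poking skb->mac.raw, skb->nh.raw and skb->tail directly and goes through skb_reset_mac_header()/skb_set_mac_header(), skb_reset_network_header() and skb_reset_tail_pointer()/skb_tail_pointer() instead. The assignments are the same bytes-for-bytes; the point is that drivers no longer depend on how those header marks are represented inside struct sk_buff. A small equivalence sketch, with the 2.6.21 spellings kept as comments:

#include <linux/skbuff.h>

/* Sketch: each accessor next to the field access it replaces above. */
static void fake_header_marks(struct sk_buff *skb, int pad)
{
	unsigned char *tail;

	skb_reset_mac_header(skb);	/* old: skb->mac.raw = skb->data;       */
	skb_set_mac_header(skb, pad);	/* old: skb->mac.raw = skb->data + pad; */
	skb_reset_network_header(skb);	/* old: skb->nh.raw  = skb->data;       */
	skb_reset_tail_pointer(skb);	/* old: skb->tail    = skb->data;       */

	tail = skb_tail_pointer(skb);	/* old: reading skb->tail directly,
					 * e.g. as a pci_map_single() address  */
	(void)tail;
}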
drivers/net/sb1250-mac.c~git-net drivers/net/sb1250-mac.c --- a/drivers/net/sb1250-mac.c~git-net +++ a/drivers/net/sb1250-mac.c @@ -959,9 +959,6 @@ static int sbdma_add_rcvbuffer(sbmacdma_ } sbdma_align_skb(sb_new, SMP_CACHE_BYTES, ETHER_ALIGN); - - /* mark skbuff owned by our device */ - sb_new->dev = d->sbdma_eth->sbm_dev; } else { sb_new = sb; diff -puN drivers/net/sc92031.c~git-net drivers/net/sc92031.c --- a/drivers/net/sc92031.c~git-net +++ a/drivers/net/sc92031.c @@ -814,7 +814,6 @@ static void _sc92031_rx_tasklet(struct n memcpy(skb_put(skb, pkt_size), rx_ring + rx_ring_offset, pkt_size); } - skb->dev = dev; skb->protocol = eth_type_trans(skb, dev); dev->last_rx = jiffies; netif_rx(skb); diff -puN drivers/net/seeq8005.c~git-net drivers/net/seeq8005.c --- a/drivers/net/seeq8005.c~git-net +++ a/drivers/net/seeq8005.c @@ -550,7 +550,6 @@ static void seeq8005_rx(struct net_devic lp->stats.rx_dropped++; break; } - skb->dev = dev; skb_reserve(skb, 2); /* align data on 16 byte */ buf = skb_put(skb,pkt_len); diff -puN drivers/net/sgiseeq.c~git-net drivers/net/sgiseeq.c --- a/drivers/net/sgiseeq.c~git-net +++ a/drivers/net/sgiseeq.c @@ -318,7 +318,6 @@ static inline void sgiseeq_rx(struct net skb = dev_alloc_skb(len + 2); if (skb) { - skb->dev = dev; skb_reserve(skb, 2); skb_put(skb, len); @@ -535,7 +534,7 @@ static int sgiseeq_start_xmit(struct sk_ * entry and the HPC got to the end of the chain before we * added this new entry and restarted it. */ - memcpy((char *)(long)td->buf_vaddr, skb->data, skblen); + skb_copy_from_linear_data(skb, (char *)(long)td->buf_vaddr, skblen); if (len != skblen) memset((char *)(long)td->buf_vaddr + skb->len, 0, len-skblen); td->tdma.cntinfo = (len & HPCDMA_BCNT) | diff -puN drivers/net/sis190.c~git-net drivers/net/sis190.c --- a/drivers/net/sis190.c~git-net +++ a/drivers/net/sis190.c @@ -632,7 +632,6 @@ static int sis190_rx_interrupt(struct ne pci_action(tp->pci_dev, le32_to_cpu(desc->addr), tp->rx_buf_sz, PCI_DMA_FROMDEVICE); - skb->dev = dev; skb_put(skb, pkt_size); skb->protocol = eth_type_trans(skb, dev); diff -puN drivers/net/sis900.c~git-net drivers/net/sis900.c --- a/drivers/net/sis900.c~git-net +++ a/drivers/net/sis900.c @@ -1160,7 +1160,6 @@ sis900_init_rx_ring(struct net_device *n buffer */ break; } - skb->dev = net_dev; sis_priv->rx_skbuff[i] = skb; sis_priv->rx_ring[i].cmdsts = RX_BUF_SIZE; sis_priv->rx_ring[i].bufptr = pci_map_single(sis_priv->pci_dev, @@ -1803,7 +1802,6 @@ static int sis900_rx(struct net_device * sis_priv->cur_rx++; break; } - skb->dev = net_dev; sis_priv->rx_skbuff[entry] = skb; sis_priv->rx_ring[entry].cmdsts = RX_BUF_SIZE; sis_priv->rx_ring[entry].bufptr = @@ -1836,7 +1834,6 @@ static int sis900_rx(struct net_device * sis_priv->stats.rx_dropped++; break; } - skb->dev = net_dev; sis_priv->rx_skbuff[entry] = skb; sis_priv->rx_ring[entry].cmdsts = RX_BUF_SIZE; sis_priv->rx_ring[entry].bufptr = diff -puN drivers/net/sk98lin/skge.c~git-net drivers/net/sk98lin/skge.c --- a/drivers/net/sk98lin/skge.c~git-net +++ a/drivers/net/sk98lin/skge.c @@ -1562,10 +1562,10 @@ struct sk_buff *pMessage) /* pointer to pTxd->pMBuf = pMessage; if (pMessage->ip_summed == CHECKSUM_PARTIAL) { - u16 hdrlen = pMessage->h.raw - pMessage->data; + u16 hdrlen = skb_transport_offset(pMessage); u16 offset = hdrlen + pMessage->csum_offset; - if ((pMessage->h.ipiph->protocol == IPPROTO_UDP ) && + if ((ipip_hdr(pMessage)->protocol == IPPROTO_UDP) && (pAC->GIni.GIChipRev == 0) && (pAC->GIni.GIChipId == CHIP_ID_YUKON)) { pTxd->TBControl = BMU_TCP_CHECK; @@ -1681,7 
+1681,7 @@ struct sk_buff *pMessage) /* pointer to ** Does the HW need to evaluate checksum for TCP or UDP packets? */ if (pMessage->ip_summed == CHECKSUM_PARTIAL) { - u16 hdrlen = pMessage->h.raw - pMessage->data; + u16 hdrlen = skb_transport_offset(pMessage); u16 offset = hdrlen + pMessage->csum_offset; Control = BMU_STFWD; @@ -1691,7 +1691,7 @@ struct sk_buff *pMessage) /* pointer to ** opcode for udp is not working in the hardware yet ** (Revision 2.0) */ - if ((pMessage->h.ipiph->protocol == IPPROTO_UDP ) && + if ((ipip_hdr(pMessage)->protocol == IPPROTO_UDP) && (pAC->GIni.GIChipRev == 0) && (pAC->GIni.GIChipId == CHIP_ID_YUKON)) { Control |= BMU_TCP_CHECK; @@ -2127,7 +2127,7 @@ rx_start: (dma_addr_t) PhysAddr, FrameLength, PCI_DMA_FROMDEVICE); - memcpy(pNewMsg->data, pMsg, FrameLength); + skb_copy_to_linear_data(pNewMsg, pMsg, FrameLength); pci_dma_sync_single_for_device(pAC->PciDev, (dma_addr_t) PhysAddr, @@ -2193,7 +2193,6 @@ rx_start: SK_PNMI_CNT_RX_OCTETS_DELIVERED(pAC, FrameLength, pRxPort->PortIndex); - pMsg->dev = pAC->dev[pRxPort->PortIndex]; pMsg->protocol = eth_type_trans(pMsg, pAC->dev[pRxPort->PortIndex]); netif_rx(pMsg); @@ -2246,7 +2245,6 @@ rx_start: (IFF_PROMISC | IFF_ALLMULTI)) != 0 || (ForRlmt & SK_RLMT_RX_PROTOCOL) == SK_RLMT_RX_PROTOCOL) { - pMsg->dev = pAC->dev[pRxPort->PortIndex]; pMsg->protocol = eth_type_trans(pMsg, pAC->dev[pRxPort->PortIndex]); netif_rx(pMsg); diff -puN drivers/net/skfp/skfddi.c~git-net drivers/net/skfp/skfddi.c --- a/drivers/net/skfp/skfddi.c~git-net +++ a/drivers/net/skfp/skfddi.c @@ -1680,7 +1680,6 @@ void mac_drv_rx_complete(struct s_smc *s rxd->rxd_os.skb = NULL; skb_trim(skb, len); skb->protocol = fddi_type_trans(skb, bp->dev); - skb->dev = bp->dev; /* pass up device pointer */ netif_rx(skb); bp->dev->last_rx = jiffies; @@ -1938,7 +1937,7 @@ int mac_drv_rx_init(struct s_smc *smc, i } skb_reserve(skb, 3); skb_put(skb, len); - memcpy(skb->data, look_ahead, len); + skb_copy_to_linear_data(skb, look_ahead, len); // deliver frame to system skb->protocol = fddi_type_trans(skb, smc->os.dev); diff -puN drivers/net/skge.c~git-net drivers/net/skge.c --- a/drivers/net/skge.c~git-net +++ a/drivers/net/skge.c @@ -2655,12 +2655,12 @@ static int skge_xmit_frame(struct sk_buf td->dma_hi = map >> 32; if (skb->ip_summed == CHECKSUM_PARTIAL) { - int offset = skb->h.raw - skb->data; + const int offset = skb_transport_offset(skb); /* This seems backwards, but it is what the sk98lin * does. Looks like hardware is wrong? 
*/ - if (skb->h.ipiph->protocol == IPPROTO_UDP + if (ipip_hdr(skb)->protocol == IPPROTO_UDP && hw->chip_rev == 0 && hw->chip_id == CHIP_ID_YUKON) control = BMU_TCP_CHECK; else @@ -2950,7 +2950,7 @@ static struct sk_buff *skge_rx_get(struc pci_dma_sync_single_for_cpu(skge->hw->pdev, pci_unmap_addr(e, mapaddr), len, PCI_DMA_FROMDEVICE); - memcpy(skb->data, e->skb->data, len); + skb_copy_from_linear_data(e->skb, skb->data, len); pci_dma_sync_single_for_device(skge->hw->pdev, pci_unmap_addr(e, mapaddr), len, PCI_DMA_FROMDEVICE); diff -puN drivers/net/sky2.c~git-net drivers/net/sky2.c --- a/drivers/net/sky2.c~git-net +++ a/drivers/net/sky2.c @@ -32,6 +32,7 @@ #include #include #include +#include #include #include #include @@ -1391,8 +1392,8 @@ static int sky2_xmit_frame(struct sk_buf /* Check for TCP Segmentation Offload */ mss = skb_shinfo(skb)->gso_size; if (mss != 0) { - mss += ((skb->h.th->doff - 5) * 4); /* TCP options */ - mss += (skb->nh.iph->ihl * 4) + sizeof(struct tcphdr); + mss += tcp_optlen(skb); /* TCP options */ + mss += ip_hdrlen(skb) + sizeof(struct tcphdr); mss += ETH_HLEN; if (mss != sky2->tx_last_mss) { @@ -1420,14 +1421,14 @@ static int sky2_xmit_frame(struct sk_buf /* Handle TCP checksum offload */ if (skb->ip_summed == CHECKSUM_PARTIAL) { - unsigned offset = skb->h.raw - skb->data; + const unsigned offset = skb_transport_offset(skb); u32 tcpsum; tcpsum = offset << 16; /* sum start */ tcpsum |= offset + skb->csum_offset; /* sum write */ ctrl = CALSUM | WR_SUM | INIT_SUM | LOCK_SUM; - if (skb->nh.iph->protocol == IPPROTO_UDP) + if (ip_hdr(skb)->protocol == IPPROTO_UDP) ctrl |= UDPTCP; if (tcpsum != sky2->tx_tcpsum) { @@ -1970,7 +1971,7 @@ static struct sk_buff *receive_copy(stru skb_reserve(skb, 2); pci_dma_sync_single_for_cpu(sky2->hw->pdev, re->data_addr, length, PCI_DMA_FROMDEVICE); - memcpy(skb->data, re->skb->data, length); + skb_copy_from_linear_data(re->skb, skb->data, length); skb->ip_summed = re->skb->ip_summed; skb->csum = re->skb->csum; pci_dma_sync_single_for_device(sky2->hw->pdev, re->data_addr, diff -puN drivers/net/slip.c~git-net drivers/net/slip.c --- a/drivers/net/slip.c~git-net +++ a/drivers/net/slip.c @@ -363,7 +363,7 @@ sl_bump(struct slip *sl) } skb->dev = sl->dev; memcpy(skb_put(skb,count), sl->rbuff, count); - skb->mac.raw=skb->data; + skb_reset_mac_header(skb); skb->protocol=htons(ETH_P_IP); netif_rx(skb); sl->dev->last_rx = jiffies; diff -puN drivers/net/smc911x.c~git-net drivers/net/smc911x.c --- a/drivers/net/smc911x.c~git-net +++ a/drivers/net/smc911x.c @@ -502,7 +502,6 @@ static inline void smc911x_rcv(struct n DBG(SMC_DEBUG_PKTS, "%s: Received packet\n", dev->name,); PRINT_PKT(data, ((pkt_len - 4) <= 64) ? 
pkt_len - 4 : 64); dev->last_rx = jiffies; - skb->dev = dev; skb->protocol = eth_type_trans(skb, dev); netif_rx(skb); lp->stats.rx_packets++; @@ -1307,7 +1306,6 @@ smc911x_rx_dma_irq(int dma, void *data) lp->current_rx_skb = NULL; PRINT_PKT(skb->data, skb->len); dev->last_rx = jiffies; - skb->dev = dev; skb->protocol = eth_type_trans(skb, dev); netif_rx(skb); lp->stats.rx_packets++; diff -puN drivers/net/smc9194.c~git-net drivers/net/smc9194.c --- a/drivers/net/smc9194.c~git-net +++ a/drivers/net/smc9194.c @@ -1262,7 +1262,6 @@ static void smc_rcv(struct net_device *d skb_reserve( skb, 2 ); /* 16 bit alignment */ - skb->dev = dev; data = skb_put( skb, packet_length); #ifdef USE_32_BIT diff -puN drivers/net/smc91x.c~git-net drivers/net/smc91x.c --- a/drivers/net/smc91x.c~git-net +++ a/drivers/net/smc91x.c @@ -568,7 +568,6 @@ static inline void smc_rcv(struct net_d PRINT_PKT(data, packet_len - 4); dev->last_rx = jiffies; - skb->dev = dev; skb->protocol = eth_type_trans(skb, dev); netif_rx(skb); lp->stats.rx_packets++; diff -puN drivers/net/sonic.c~git-net drivers/net/sonic.c --- a/drivers/net/sonic.c~git-net +++ a/drivers/net/sonic.c @@ -85,7 +85,6 @@ static int sonic_open(struct net_device dev->name); return -ENOMEM; } - skb->dev = dev; /* align IP header unless DMA requires otherwise */ if (SONIC_BUS_SCALE(lp->dma_bitmode) == 2) skb_reserve(skb, 2); @@ -451,7 +450,6 @@ static void sonic_rx(struct net_device * lp->stats.rx_dropped++; break; } - new_skb->dev = dev; /* provide 16 byte IP header alignment unless DMA requires otherwise */ if(SONIC_BUS_SCALE(lp->dma_bitmode) == 2) skb_reserve(new_skb, 2); diff -puN drivers/net/spider_net.c~git-net drivers/net/spider_net.c --- a/drivers/net/spider_net.c~git-net +++ a/drivers/net/spider_net.c @@ -720,7 +720,7 @@ spider_net_prepare_tx_descr(struct spide spin_unlock_irqrestore(&chain->lock, flags); if (skb->protocol == htons(ETH_P_IP) && skb->ip_summed == CHECKSUM_PARTIAL) - switch (skb->nh.iph->protocol) { + switch (ip_hdr(skb)->protocol) { case IPPROTO_TCP: hwdescr->dmac_cmd_status |= SPIDER_NET_DMAC_TCP; break; @@ -990,7 +990,6 @@ spider_net_pass_skb_up(struct spider_net netdev = card->netdev; skb = descr->skb; - skb->dev = netdev; skb_put(skb, hwdescr->valid_size); /* the card seems to add 2 bytes of junk in front diff -puN drivers/net/starfire.c~git-net drivers/net/starfire.c --- a/drivers/net/starfire.c~git-net +++ a/drivers/net/starfire.c @@ -1452,7 +1452,6 @@ static int __netdev_rx(struct net_device to a minimally-sized skbuff. 
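The memcpy() calls that reached into skb->data are rewritten as skb_copy_from_linear_data(), skb_copy_from_linear_data_offset() and skb_copy_to_linear_data(). They move exactly the same bytes; the helper names make the direction explicit and keep knowledge of the linear area inside skbuff.h. A sketch of the rx_copybreak copy that appears in several of the hunks above, assuming both skbs have at least len bytes of linear data (the offset value is only an example):

#include <linux/skbuff.h>

/* Sketch of the copybreak path: copy a small frame out of the DMA-mapped
 * ring skb "src" into a freshly allocated "dst" so the original buffer
 * can be reused on the RX ring.
 */
static void fake_copybreak(struct sk_buff *dst, const struct sk_buff *src,
			   unsigned int len)
{
	/* old: memcpy(dst->data, src->data, len);            */
	skb_copy_from_linear_data(src, dst->data, len);

	/* alternative form with an offset into the source:
	 * old: memcpy(dst->data, src->data + 2, len - 2);    */
	skb_copy_from_linear_data_offset(src, 2, dst->data, len - 2);
}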
*/ if (pkt_len < rx_copybreak && (skb = dev_alloc_skb(pkt_len + 2)) != NULL) { - skb->dev = dev; skb_reserve(skb, 2); /* 16 byte align the IP header */ pci_dma_sync_single_for_cpu(np->pci_dev, np->rx_info[entry].mapping, diff -puN drivers/net/sun3_82586.c~git-net drivers/net/sun3_82586.c --- a/drivers/net/sun3_82586.c~git-net +++ a/drivers/net/sun3_82586.c @@ -775,7 +775,6 @@ static void sun3_82586_rcv_int(struct ne skb = (struct sk_buff *) dev_alloc_skb(totlen+2); if(skb != NULL) { - skb->dev = dev; skb_reserve(skb,2); skb_put(skb,totlen); eth_copy_and_sum(skb,(char *) p->base+swab32((unsigned long) rbd->buffer),totlen,0); @@ -1027,7 +1026,7 @@ static int sun3_82586_send_packet(struct memset((char *)p->xmit_cbuffs[p->xmit_count], 0, ETH_ZLEN); len = ETH_ZLEN; } - memcpy((char *)p->xmit_cbuffs[p->xmit_count],(char *)(skb->data),skb->len); + skb_copy_from_linear_data(skb, p->xmit_cbuffs[p->xmit_count], skb->len); #if (NUM_XMIT_BUFFS == 1) # ifdef NO_NOPCOMMANDS diff -puN drivers/net/sun3lance.c~git-net drivers/net/sun3lance.c --- a/drivers/net/sun3lance.c~git-net +++ a/drivers/net/sun3lance.c @@ -629,7 +629,7 @@ static int lance_start_xmit( struct sk_b head->length = (-len) | 0xf000; head->misc = 0; - memcpy( PKTBUF_ADDR(head), (void *)skb->data, skb->len ); + skb_copy_from_linear_data(skb, PKTBUF_ADDR(head), skb->len); if (len != skb->len) memset(PKTBUF_ADDR(head) + skb->len, 0, len-skb->len); @@ -851,10 +851,9 @@ static int lance_rx( struct net_device * } - skb->dev = dev; skb_reserve( skb, 2 ); /* 16 byte align */ skb_put( skb, pkt_len ); /* Make room */ -// memcpy( skb->data, PKTBUF_ADDR(head), pkt_len ); +// skb_copy_to_linear_data(skb, PKTBUF_ADDR(head), pkt_len); eth_copy_and_sum(skb, PKTBUF_ADDR(head), pkt_len, 0); diff -puN drivers/net/sunbmac.c~git-net drivers/net/sunbmac.c --- a/drivers/net/sunbmac.c~git-net +++ a/drivers/net/sunbmac.c @@ -855,7 +855,6 @@ static void bigmac_rx(struct bigmac *bp) drops++; goto drop_it; } - copy_skb->dev = bp->dev; skb_reserve(copy_skb, 2); skb_put(copy_skb, len); sbus_dma_sync_single_for_cpu(bp->bigmac_sdev, diff -puN drivers/net/sundance.c~git-net drivers/net/sundance.c --- a/drivers/net/sundance.c~git-net +++ a/drivers/net/sundance.c @@ -1309,7 +1309,6 @@ static void rx_poll(unsigned long data) to a minimally-sized skbuff. */ if (pkt_len < rx_copybreak && (skb = dev_alloc_skb(pkt_len + 2)) != NULL) { - skb->dev = dev; skb_reserve(skb, 2); /* 16 byte align the IP header */ pci_dma_sync_single_for_cpu(np->pci_dev, desc->frag[0].addr, diff -puN drivers/net/sungem.c~git-net drivers/net/sungem.c --- a/drivers/net/sungem.c~git-net +++ a/drivers/net/sungem.c @@ -845,11 +845,10 @@ static int gem_rx(struct gem *gp, int wo goto drop_it; } - copy_skb->dev = gp->dev; skb_reserve(copy_skb, 2); skb_put(copy_skb, len); pci_dma_sync_single_for_cpu(gp->pdev, dma_addr, len, PCI_DMA_FROMDEVICE); - memcpy(copy_skb->data, skb->data, len); + skb_copy_from_linear_data(skb, copy_skb->data, len); pci_dma_sync_single_for_device(gp->pdev, dma_addr, len, PCI_DMA_FROMDEVICE); /* We'll reuse the original ring buffer. 
*/ @@ -1029,10 +1028,8 @@ static int gem_start_xmit(struct sk_buff ctrl = 0; if (skb->ip_summed == CHECKSUM_PARTIAL) { - u64 csum_start_off, csum_stuff_off; - - csum_start_off = (u64) (skb->h.raw - skb->data); - csum_stuff_off = csum_start_off + skb->csum_offset; + const u64 csum_start_off = skb_transport_offset(skb); + const u64 csum_stuff_off = csum_start_off + skb->csum_offset; ctrl = (TXDCTRL_CENAB | (csum_start_off << 15) | diff -puN drivers/net/sunhme.c~git-net drivers/net/sunhme.c --- a/drivers/net/sunhme.c~git-net +++ a/drivers/net/sunhme.c @@ -2058,11 +2058,10 @@ static void happy_meal_rx(struct happy_m goto drop_it; } - copy_skb->dev = dev; skb_reserve(copy_skb, 2); skb_put(copy_skb, len); hme_dma_sync_for_cpu(hp, dma_addr, len, DMA_FROMDEVICE); - memcpy(copy_skb->data, skb->data, len); + skb_copy_from_linear_data(skb, copy_skb->data, len); hme_dma_sync_for_device(hp, dma_addr, len, DMA_FROMDEVICE); /* Reuse original ring buffer. */ @@ -2270,10 +2269,8 @@ static int happy_meal_start_xmit(struct tx_flags = TXFLAG_OWN; if (skb->ip_summed == CHECKSUM_PARTIAL) { - u32 csum_start_off, csum_stuff_off; - - csum_start_off = (u32) (skb->h.raw - skb->data); - csum_stuff_off = csum_start_off + skb->csum_offset; + const u32 csum_start_off = skb_transport_offset(skb); + const u32 csum_stuff_off = csum_start_off + skb->csum_offset; tx_flags = (TXFLAG_OWN | TXFLAG_CSENABLE | ((csum_start_off << 14) & TXFLAG_CSBUFBEGIN) | diff -puN drivers/net/sunlance.c~git-net drivers/net/sunlance.c --- a/drivers/net/sunlance.c~git-net +++ a/drivers/net/sunlance.c @@ -547,7 +547,6 @@ static void lance_rx_dvma(struct net_dev lp->stats.rx_bytes += len; - skb->dev = dev; skb_reserve(skb, 2); /* 16 byte align */ skb_put(skb, len); /* make room */ eth_copy_and_sum(skb, @@ -721,7 +720,6 @@ static void lance_rx_pio(struct net_devi lp->stats.rx_bytes += len; - skb->dev = dev; skb_reserve (skb, 2); /* 16 byte align */ skb_put(skb, len); /* make room */ lance_piocopy_to_skb(skb, &(ib->rx_buf[entry][0]), len); @@ -1145,7 +1143,7 @@ static int lance_start_xmit(struct sk_bu struct lance_init_block *ib = lp->init_block_mem; ib->btx_ring [entry].length = (-len) | 0xf000; ib->btx_ring [entry].misc = 0; - memcpy((char *)&ib->tx_buf [entry][0], skb->data, skblen); + skb_copy_from_linear_data(skb, &ib->tx_buf [entry][0], skblen); if (len != skblen) memset((char *) &ib->tx_buf [entry][skblen], 0, len - skblen); ib->btx_ring [entry].tmd1_bits = (LE_T1_POK | LE_T1_OWN); diff -puN drivers/net/sunqe.c~git-net drivers/net/sunqe.c --- a/drivers/net/sunqe.c~git-net +++ a/drivers/net/sunqe.c @@ -437,7 +437,6 @@ static void qe_rx(struct sunqe *qep) drops++; qep->net_stats.rx_dropped++; } else { - skb->dev = qep->dev; skb_reserve(skb, 2); skb_put(skb, len); eth_copy_and_sum(skb, (unsigned char *) this_qbuf, @@ -593,7 +592,7 @@ static int qe_start_xmit(struct sk_buff /* Avoid a race... 
*/ qep->qe_block->qe_txd[entry].tx_flags = TXD_UPDATE; - memcpy(txbuf, skb->data, len); + skb_copy_from_linear_data(skb, txbuf, len); qep->qe_block->qe_txd[entry].tx_addr = txbuf_dvma; qep->qe_block->qe_txd[entry].tx_flags = diff -puN drivers/net/tc35815.c~git-net drivers/net/tc35815.c --- a/drivers/net/tc35815.c~git-net +++ a/drivers/net/tc35815.c @@ -1493,7 +1493,6 @@ tc35815_rx(struct net_device *dev) break; } skb_reserve(skb, 2); /* 16 bit alignment */ - skb->dev = dev; data = skb_put(skb, pkt_len); diff -puN drivers/net/tg3.c~git-net drivers/net/tg3.c --- a/drivers/net/tg3.c~git-net +++ a/drivers/net/tg3.c @@ -40,6 +40,7 @@ #include #include +#include #include #include @@ -3349,7 +3350,7 @@ static int tg3_rx(struct tg3 *tp, int bu skb_reserve(copy_skb, 2); skb_put(copy_skb, len); pci_dma_sync_single_for_cpu(tp->pdev, dma_addr, len, PCI_DMA_FROMDEVICE); - memcpy(copy_skb->data, skb->data, len); + skb_copy_from_linear_data(skb, copy_skb->data, len); pci_dma_sync_single_for_device(tp->pdev, dma_addr, len, PCI_DMA_FROMDEVICE); /* We'll reuse the original ring buffer. */ @@ -3908,20 +3909,20 @@ static int tg3_start_xmit(struct sk_buff if (skb_shinfo(skb)->gso_type & SKB_GSO_TCPV6) mss |= (skb_headlen(skb) - ETH_HLEN) << 9; else { - tcp_opt_len = ((skb->h.th->doff - 5) * 4); - ip_tcp_len = (skb->nh.iph->ihl * 4) + - sizeof(struct tcphdr); - - skb->nh.iph->check = 0; - skb->nh.iph->tot_len = htons(mss + ip_tcp_len + - tcp_opt_len); + struct iphdr *iph = ip_hdr(skb); + + tcp_opt_len = tcp_optlen(skb); + ip_tcp_len = ip_hdrlen(skb) + sizeof(struct tcphdr); + + iph->check = 0; + iph->tot_len = htons(mss + ip_tcp_len + tcp_opt_len); mss |= (ip_tcp_len + tcp_opt_len) << 9; } base_flags |= (TXD_FLAG_CPU_PRE_DMA | TXD_FLAG_CPU_POST_DMA); - skb->h.th->check = 0; + tcp_hdr(skb)->check = 0; } else if (skb->ip_summed == CHECKSUM_PARTIAL) @@ -4055,6 +4056,7 @@ static int tg3_start_xmit_dma_bug(struct mss = 0; if (skb->len > (tp->dev->mtu + ETH_HLEN) && (mss = skb_shinfo(skb)->gso_size) != 0) { + struct iphdr *iph; int tcp_opt_len, ip_tcp_len, hdr_len; if (skb_header_cloned(skb) && @@ -4063,8 +4065,8 @@ static int tg3_start_xmit_dma_bug(struct goto out_unlock; } - tcp_opt_len = ((skb->h.th->doff - 5) * 4); - ip_tcp_len = (skb->nh.iph->ihl * 4) + sizeof(struct tcphdr); + tcp_opt_len = tcp_optlen(skb); + ip_tcp_len = ip_hdrlen(skb) + sizeof(struct tcphdr); hdr_len = ip_tcp_len + tcp_opt_len; if (unlikely((ETH_HLEN + hdr_len) > 80) && @@ -4074,34 +4076,31 @@ static int tg3_start_xmit_dma_bug(struct base_flags |= (TXD_FLAG_CPU_PRE_DMA | TXD_FLAG_CPU_POST_DMA); - skb->nh.iph->check = 0; - skb->nh.iph->tot_len = htons(mss + hdr_len); + iph = ip_hdr(skb); + iph->check = 0; + iph->tot_len = htons(mss + hdr_len); if (tp->tg3_flags2 & TG3_FLG2_HW_TSO) { - skb->h.th->check = 0; + tcp_hdr(skb)->check = 0; base_flags &= ~TXD_FLAG_TCPUDP_CSUM; - } - else { - skb->h.th->check = - ~csum_tcpudp_magic(skb->nh.iph->saddr, - skb->nh.iph->daddr, - 0, IPPROTO_TCP, 0); - } + } else + tcp_hdr(skb)->check = ~csum_tcpudp_magic(iph->saddr, + iph->daddr, 0, + IPPROTO_TCP, + 0); if ((tp->tg3_flags2 & TG3_FLG2_HW_TSO) || (GET_ASIC_REV(tp->pci_chip_rev_id) == ASIC_REV_5705)) { - if (tcp_opt_len || skb->nh.iph->ihl > 5) { + if (tcp_opt_len || iph->ihl > 5) { int tsflags; - tsflags = ((skb->nh.iph->ihl - 5) + - (tcp_opt_len >> 2)); + tsflags = (iph->ihl - 5) + (tcp_opt_len >> 2); mss |= (tsflags << 11); } } else { - if (tcp_opt_len || skb->nh.iph->ihl > 5) { + if (tcp_opt_len || iph->ihl > 5) { int tsflags; - tsflags = 
((skb->nh.iph->ihl - 5) + - (tcp_opt_len >> 2)); + tsflags = (iph->ihl - 5) + (tcp_opt_len >> 2); base_flags |= tsflags << 12; } } diff -puN drivers/net/tlan.c~git-net drivers/net/tlan.c --- a/drivers/net/tlan.c~git-net +++ a/drivers/net/tlan.c @@ -1112,7 +1112,7 @@ static int TLan_StartTx( struct sk_buff if ( bbuf ) { tail_buffer = priv->txBuffer + ( priv->txTail * TLAN_MAX_FRAME_SIZE ); - memcpy( tail_buffer, skb->data, skb->len ); + skb_copy_from_linear_data(skb, tail_buffer, skb->len); } else { tail_list->buffer[0].address = pci_map_single(priv->pciDev, skb->data, skb->len, PCI_DMA_TODEVICE); TLan_StoreSKB(tail_list, skb); @@ -1577,7 +1577,6 @@ u32 TLan_HandleRxEOF( struct net_device printk(KERN_INFO "TLAN: Couldn't allocate memory for received data.\n"); else { head_buffer = priv->rxBuffer + (priv->rxHead * TLAN_MAX_FRAME_SIZE); - skb->dev = dev; skb_reserve(skb, 2); t = (void *) skb_put(skb, frameSize); @@ -1608,7 +1607,6 @@ u32 TLan_HandleRxEOF( struct net_device skb->protocol = eth_type_trans( skb, dev ); netif_rx( skb ); - new_skb->dev = dev; skb_reserve( new_skb, 2 ); t = (void *) skb_put( new_skb, TLAN_MAX_FRAME_SIZE ); head_list->buffer[0].address = pci_map_single(priv->pciDev, new_skb->data, TLAN_MAX_FRAME_SIZE, PCI_DMA_FROMDEVICE); diff -puN drivers/net/tokenring/3c359.c~git-net drivers/net/tokenring/3c359.c --- a/drivers/net/tokenring/3c359.c~git-net +++ a/drivers/net/tokenring/3c359.c @@ -933,20 +933,21 @@ static void xl_rx(struct net_device *dev return ; } - skb->dev = dev ; - while (xl_priv->rx_ring_tail != temp_ring_loc) { copy_len = xl_priv->xl_rx_ring[xl_priv->rx_ring_tail].upfraglen & 0x7FFF ; frame_length -= copy_len ; pci_dma_sync_single_for_cpu(xl_priv->pdev,xl_priv->xl_rx_ring[xl_priv->rx_ring_tail].upfragaddr,xl_priv->pkt_buf_sz,PCI_DMA_FROMDEVICE) ; - memcpy(skb_put(skb,copy_len), xl_priv->rx_ring_skb[xl_priv->rx_ring_tail]->data, copy_len) ; + skb_copy_from_linear_data(xl_priv->rx_ring_skb[xl_priv->rx_ring_tail], + skb_put(skb, copy_len), + copy_len); pci_dma_sync_single_for_device(xl_priv->pdev,xl_priv->xl_rx_ring[xl_priv->rx_ring_tail].upfragaddr,xl_priv->pkt_buf_sz,PCI_DMA_FROMDEVICE) ; adv_rx_ring(dev) ; } /* Now we have found the last fragment */ pci_dma_sync_single_for_cpu(xl_priv->pdev,xl_priv->xl_rx_ring[xl_priv->rx_ring_tail].upfragaddr,xl_priv->pkt_buf_sz,PCI_DMA_FROMDEVICE) ; - memcpy(skb_put(skb,copy_len), xl_priv->rx_ring_skb[xl_priv->rx_ring_tail]->data, frame_length) ; + skb_copy_from_linear_data(xl_priv->rx_ring_skb[xl_priv->rx_ring_tail], + skb_put(skb,copy_len), frame_length); /* memcpy(skb_put(skb,frame_length), bus_to_virt(xl_priv->xl_rx_ring[xl_priv->rx_ring_tail].upfragaddr), frame_length) ; */ pci_dma_sync_single_for_device(xl_priv->pdev,xl_priv->xl_rx_ring[xl_priv->rx_ring_tail].upfragaddr,xl_priv->pkt_buf_sz,PCI_DMA_FROMDEVICE) ; adv_rx_ring(dev) ; @@ -967,8 +968,6 @@ static void xl_rx(struct net_device *dev return ; } - skb->dev = dev ; - skb2 = xl_priv->rx_ring_skb[xl_priv->rx_ring_tail] ; pci_unmap_single(xl_priv->pdev, xl_priv->xl_rx_ring[xl_priv->rx_ring_tail].upfragaddr, xl_priv->pkt_buf_sz,PCI_DMA_FROMDEVICE) ; skb_put(skb2, frame_length) ; diff -puN drivers/net/tokenring/ibmtr.c~git-net drivers/net/tokenring/ibmtr.c --- a/drivers/net/tokenring/ibmtr.c~git-net +++ a/drivers/net/tokenring/ibmtr.c @@ -1771,7 +1771,6 @@ static void tr_rx(struct net_device *dev /*BMS again, if she comes in with few but leaves with many */ skb_reserve(skb, sizeof(struct trh_hdr) - lan_hdr_len); skb_put(skb, length); - skb->dev = dev; data = skb->data; 
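The tg3 and sky2 hunks above are the heaviest users of the new helpers on the TSO path: tcp_optlen(), ip_hdrlen(), ip_hdr() and tcp_hdr() replace the (th->doff - 5) * 4 and iph->ihl * 4 arithmetic, and the pseudo-header checksum is primed through the same accessors. Below is a condensed sketch of that computation for an IPv4 TCP frame with gso_size already set; the hardware-descriptor writes are left out.

#include <linux/skbuff.h>
#include <linux/in.h>
#include <linux/ip.h>
#include <linux/tcp.h>
#include <net/checksum.h>
#include <net/ip.h>

/* Sketch: header fixups for TSO, as done in the tg3/sky2 hunks above. */
static unsigned int fake_tso_prep(struct sk_buff *skb)
{
	struct iphdr *iph = ip_hdr(skb);		/* old: skb->nh.iph          */
	unsigned int tcp_opt_len = tcp_optlen(skb);	/* old: (th->doff - 5) * 4   */
	unsigned int ip_tcp_len = ip_hdrlen(skb) + sizeof(struct tcphdr);
							/* old: iph->ihl * 4 + ...   */

	iph->check = 0;
	iph->tot_len = htons(skb_shinfo(skb)->gso_size + ip_tcp_len + tcp_opt_len);

	/* old: skb->h.th->check = ~csum_tcpudp_magic(...)   */
	tcp_hdr(skb)->check = ~csum_tcpudp_magic(iph->saddr, iph->daddr,
						 0, IPPROTO_TCP, 0);

	return ip_tcp_len + tcp_opt_len;	/* L3 + L4 header bytes for the HW */
}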
rbuffer_len = ntohs(readw(rbuf + offsetof(struct rec_buf, buf_len))); rbufdata = rbuf + offsetof(struct rec_buf, data); diff -puN drivers/net/tokenring/lanstreamer.c~git-net drivers/net/tokenring/lanstreamer.c --- a/drivers/net/tokenring/lanstreamer.c~git-net +++ a/drivers/net/tokenring/lanstreamer.c @@ -944,8 +944,6 @@ static void streamer_rx(struct net_devic printk(KERN_WARNING "%s: Not enough memory to copy packet to upper layers. \n", dev->name); streamer_priv->streamer_stats.rx_dropped++; } else { /* we allocated an skb OK */ - skb->dev = dev; - if (buffer_cnt == 1) { /* release the DMA mapping */ pci_unmap_single(streamer_priv->pci_dev, @@ -1607,10 +1605,11 @@ static void streamer_arb_cmd(struct net_ frame_data, buffer_len); } while (next_ptr && (buff_off = next_ptr)); + mac_frame->protocol = tr_type_trans(mac_frame, dev); #if STREAMER_NETWORK_MONITOR printk(KERN_WARNING "%s: Received MAC Frame, details: \n", dev->name); - mac_hdr = (struct trh_hdr *) mac_frame->data; + mac_hdr = tr_hdr(mac_frame); printk(KERN_WARNING "%s: MAC Frame Dest. Addr: %02x:%02x:%02x:%02x:%02x:%02x \n", dev->name, mac_hdr->daddr[0], mac_hdr->daddr[1], @@ -1622,8 +1621,6 @@ static void streamer_arb_cmd(struct net_ mac_hdr->saddr[2], mac_hdr->saddr[3], mac_hdr->saddr[4], mac_hdr->saddr[5]); #endif - mac_frame->dev = dev; - mac_frame->protocol = tr_type_trans(mac_frame, dev); netif_rx(mac_frame); /* Now tell the card we have dealt with the received frame */ diff -puN drivers/net/tokenring/olympic.c~git-net drivers/net/tokenring/olympic.c --- a/drivers/net/tokenring/olympic.c~git-net +++ a/drivers/net/tokenring/olympic.c @@ -814,8 +814,6 @@ static void olympic_rx(struct net_device olympic_priv->rx_ring_last_received += i ; olympic_priv->rx_ring_last_received &= (OLYMPIC_RX_RING_SIZE -1) ; } else { - skb->dev = dev ; - /* Optimise based upon number of buffers used. If only one buffer is used we can simply swap the buffers around. If more than one then we must use the new buffer and copy the information @@ -847,7 +845,9 @@ static void olympic_rx(struct net_device pci_dma_sync_single_for_cpu(olympic_priv->pdev, le32_to_cpu(olympic_priv->olympic_rx_ring[rx_ring_last_received].buffer), olympic_priv->pkt_buf_sz,PCI_DMA_FROMDEVICE) ; - memcpy(skb_put(skb,length-4),olympic_priv->rx_ring_skb[rx_ring_last_received]->data,length-4) ; + skb_copy_from_linear_data(olympic_priv->rx_ring_skb[rx_ring_last_received], + skb_put(skb,length - 4), + length - 4); pci_dma_sync_single_for_device(olympic_priv->pdev, le32_to_cpu(olympic_priv->olympic_rx_ring[rx_ring_last_received].buffer), olympic_priv->pkt_buf_sz,PCI_DMA_FROMDEVICE) ; @@ -864,7 +864,9 @@ static void olympic_rx(struct net_device olympic_priv->pkt_buf_sz,PCI_DMA_FROMDEVICE) ; rx_desc = &(olympic_priv->olympic_rx_ring[rx_ring_last_received]); cpy_length = (i == 1 ? 
frag_len : le32_to_cpu(rx_desc->res_length)); - memcpy(skb_put(skb, cpy_length), olympic_priv->rx_ring_skb[rx_ring_last_received]->data, cpy_length) ; + skb_copy_from_linear_data(olympic_priv->rx_ring_skb[rx_ring_last_received], + skb_put(skb, cpy_length), + cpy_length); pci_dma_sync_single_for_device(olympic_priv->pdev, le32_to_cpu(olympic_priv->olympic_rx_ring[rx_ring_last_received].buffer), olympic_priv->pkt_buf_sz,PCI_DMA_FROMDEVICE) ; @@ -1440,16 +1442,16 @@ static void olympic_arb_cmd(struct net_d next_ptr=readw(buf_ptr+offsetof(struct mac_receive_buffer,next)); } while (next_ptr && (buf_ptr=olympic_priv->olympic_lap + ntohs(next_ptr))); + mac_frame->protocol = tr_type_trans(mac_frame, dev); + if (olympic_priv->olympic_network_monitor) { struct trh_hdr *mac_hdr ; printk(KERN_WARNING "%s: Received MAC Frame, details: \n",dev->name) ; - mac_hdr = (struct trh_hdr *)mac_frame->data ; + mac_hdr = tr_hdr(mac_frame); printk(KERN_WARNING "%s: MAC Frame Dest. Addr: %02x:%02x:%02x:%02x:%02x:%02x \n", dev->name , mac_hdr->daddr[0], mac_hdr->daddr[1], mac_hdr->daddr[2], mac_hdr->daddr[3], mac_hdr->daddr[4], mac_hdr->daddr[5]) ; printk(KERN_WARNING "%s: MAC Frame Srce. Addr: %02x:%02x:%02x:%02x:%02x:%02x \n", dev->name , mac_hdr->saddr[0], mac_hdr->saddr[1], mac_hdr->saddr[2], mac_hdr->saddr[3], mac_hdr->saddr[4], mac_hdr->saddr[5]) ; } - mac_frame->dev = dev ; - mac_frame->protocol = tr_type_trans(mac_frame,dev); - netif_rx(mac_frame) ; + netif_rx(mac_frame); dev->last_rx = jiffies; drop_frame: diff -puN drivers/net/tokenring/smctr.c~git-net drivers/net/tokenring/smctr.c --- a/drivers/net/tokenring/smctr.c~git-net +++ a/drivers/net/tokenring/smctr.c @@ -3889,14 +3889,13 @@ static int smctr_process_rx_packet(MAC_H /* Slide data into a sleek skb. */ skb_put(skb, skb->len); - memcpy(skb->data, rmf, skb->len); + skb_copy_to_linear_data(skb, rmf, skb->len); /* Update Counters */ tp->MacStat.rx_packets++; tp->MacStat.rx_bytes += skb->len; /* Kick the packet on up. */ - skb->dev = dev; skb->protocol = tr_type_trans(skb, dev); netif_rx(skb); dev->last_rx = jiffies; @@ -4476,14 +4475,13 @@ static int smctr_rx_frame(struct net_dev if (skb) { skb_put(skb, rx_size); - memcpy(skb->data, pbuff, rx_size); + skb_copy_to_linear_data(skb, pbuff, rx_size); /* Update Counters */ tp->MacStat.rx_packets++; tp->MacStat.rx_bytes += skb->len; /* Kick the packet on up. 
*/ - skb->dev = dev; skb->protocol = tr_type_trans(skb, dev); netif_rx(skb); dev->last_rx = jiffies; diff -puN drivers/net/tokenring/tms380tr.c~git-net drivers/net/tokenring/tms380tr.c --- a/drivers/net/tokenring/tms380tr.c~git-net +++ a/drivers/net/tokenring/tms380tr.c @@ -644,7 +644,7 @@ static int tms380tr_hardware_send_packet dmabuf = 0; i = tp->TplFree->TPLIndex; buf = tp->LocalTxBuffers[i]; - memcpy(buf, skb->data, length); + skb_copy_from_linear_data(skb, buf, length); newbuf = ((char *)buf - (char *)tp) + tp->dmabuffer; } else { @@ -2168,7 +2168,6 @@ static void tms380tr_rcv_status_irq(stru } else { - skb->dev = dev; skb_put(skb, tp->MaxPacketSize); rpl->SkbStat = SKB_DATA_COPY; ReceiveDataPtr = rpl->MData; @@ -2179,7 +2178,8 @@ static void tms380tr_rcv_status_irq(stru || rpl->SkbStat == SKB_DMA_DIRECT)) { if(rpl->SkbStat == SKB_DATA_COPY) - memcpy(skb->data, ReceiveDataPtr, Length); + skb_copy_to_linear_data(skb, ReceiveDataPtr, + Length); /* Deliver frame to system */ rpl->Skb = NULL; diff -puN drivers/net/tsi108_eth.c~git-net drivers/net/tsi108_eth.c --- a/drivers/net/tsi108_eth.c~git-net +++ a/drivers/net/tsi108_eth.c @@ -788,7 +788,6 @@ static int tsi108_complete_rx(struct net printk(".\n"); } - skb->dev = dev; skb_put(skb, data->rxring[rx].len); skb->protocol = eth_type_trans(skb, dev); netif_receive_skb(skb); diff -puN drivers/net/tulip/de2104x.c~git-net drivers/net/tulip/de2104x.c --- a/drivers/net/tulip/de2104x.c~git-net +++ a/drivers/net/tulip/de2104x.c @@ -435,7 +435,6 @@ static void de_rx (struct de_private *de rx_work = 100; goto rx_next; } - copy_skb->dev = de->dev; if (!copying_skb) { pci_unmap_single(de->pdev, mapping, @@ -450,8 +449,8 @@ static void de_rx (struct de_private *de } else { pci_dma_sync_single_for_cpu(de->pdev, mapping, len, PCI_DMA_FROMDEVICE); skb_reserve(copy_skb, RX_OFFSET); - memcpy(skb_put(copy_skb, len), skb->data, len); - + skb_copy_from_linear_data(skb, skb_put(copy_skb, len), + len); pci_dma_sync_single_for_device(de->pdev, mapping, len, PCI_DMA_FROMDEVICE); /* We'll reuse the original ring buffer. 
*/ diff -puN drivers/net/tulip/de4x5.c~git-net drivers/net/tulip/de4x5.c --- a/drivers/net/tulip/de4x5.c~git-net +++ a/drivers/net/tulip/de4x5.c @@ -3634,7 +3634,6 @@ de4x5_alloc_rx_buff(struct net_device *d p = dev_alloc_skb(IEEE802_3_SZ + DE4X5_ALIGN + 2); if (!p) return NULL; - p->dev = dev; tmp = virt_to_bus(p->data); i = ((tmp + DE4X5_ALIGN) & ~DE4X5_ALIGN) - tmp; skb_reserve(p, i); @@ -3655,7 +3654,6 @@ de4x5_alloc_rx_buff(struct net_device *d p = dev_alloc_skb(len + 2); if (!p) return NULL; - p->dev = dev; skb_reserve(p, 2); /* Align */ if (index < lp->rx_old) { /* Wrapped buffer */ short tlen = (lp->rxRingSize - lp->rx_old) * RX_BUFF_SZ; diff -puN drivers/net/tulip/dmfe.c~git-net drivers/net/tulip/dmfe.c --- a/drivers/net/tulip/dmfe.c~git-net +++ a/drivers/net/tulip/dmfe.c @@ -686,7 +686,7 @@ static int dmfe_start_xmit(struct sk_buf /* transmit this packet */ txptr = db->tx_insert_ptr; - memcpy(txptr->tx_buf_ptr, skb->data, skb->len); + skb_copy_from_linear_data(skb, txptr->tx_buf_ptr, skb->len); txptr->tdes1 = cpu_to_le32(0xe1000000 | skb->len); /* Point to next transmit free descriptor */ @@ -992,14 +992,14 @@ static void dmfe_rx_packet(struct DEVICE skb = newskb; /* size less than COPY_SIZE, allocate a rxlen SKB */ - skb->dev = dev; skb_reserve(skb, 2); /* 16byte align */ - memcpy(skb_put(skb, rxlen), rxptr->rx_skb_ptr->data, rxlen); + skb_copy_from_linear_data(rxptr->rx_skb_ptr, + skb_put(skb, rxlen), + rxlen); dmfe_reuse_skb(db, rxptr->rx_skb_ptr); - } else { - skb->dev = dev; + } else skb_put(skb, rxlen); - } + skb->protocol = eth_type_trans(skb, dev); netif_rx(skb); dev->last_rx = jiffies; diff -puN drivers/net/tulip/interrupt.c~git-net drivers/net/tulip/interrupt.c --- a/drivers/net/tulip/interrupt.c~git-net +++ a/drivers/net/tulip/interrupt.c @@ -192,7 +192,6 @@ int tulip_poll(struct net_device *dev, i to a minimally-sized skbuff. */ if (pkt_len < tulip_rx_copybreak && (skb = dev_alloc_skb(pkt_len + 2)) != NULL) { - skb->dev = dev; skb_reserve(skb, 2); /* 16 byte align the IP header */ pci_dma_sync_single_for_cpu(tp->pdev, tp->rx_buffers[entry].mapping, @@ -416,7 +415,6 @@ static int tulip_rx(struct net_device *d to a minimally-sized skbuff. 
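On the transmit side the same helper replaces memcpy() out of skb->data into driver-owned bounce buffers (dmfe just above, and ni52, ni65, sun3lance, sunlance, tlan and others earlier), with runt frames still padded to ETH_ZLEN by hand. A sketch of that pattern; the bounce buffer is hypothetical and assumed to be at least ETH_ZLEN bytes and visible to the hardware.

#include <linux/skbuff.h>
#include <linux/if_ether.h>
#include <linux/string.h>

/* Sketch of the TX bounce-buffer copy; returns the length given to the HW. */
static unsigned int fake_tx_bounce(unsigned char *buf, const struct sk_buff *skb)
{
	unsigned int len = skb->len;

	/* old: memcpy(buf, skb->data, skb->len); */
	skb_copy_from_linear_data(skb, buf, skb->len);

	if (len < ETH_ZLEN) {			/* pad short frames to the minimum */
		memset(buf + skb->len, 0, ETH_ZLEN - skb->len);
		len = ETH_ZLEN;
	}
	return len;
}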
*/ if (pkt_len < tulip_rx_copybreak && (skb = dev_alloc_skb(pkt_len + 2)) != NULL) { - skb->dev = dev; skb_reserve(skb, 2); /* 16 byte align the IP header */ pci_dma_sync_single_for_cpu(tp->pdev, tp->rx_buffers[entry].mapping, diff -puN drivers/net/tulip/uli526x.c~git-net drivers/net/tulip/uli526x.c --- a/drivers/net/tulip/uli526x.c~git-net +++ a/drivers/net/tulip/uli526x.c @@ -583,7 +583,7 @@ static int uli526x_start_xmit(struct sk_ /* transmit this packet */ txptr = db->tx_insert_ptr; - memcpy(txptr->tx_buf_ptr, skb->data, skb->len); + skb_copy_from_linear_data(skb, txptr->tx_buf_ptr, skb->len); txptr->tdes1 = cpu_to_le32(0xe1000000 | skb->len); /* Point to next transmit free descriptor */ @@ -828,14 +828,14 @@ static void uli526x_rx_packet(struct net ( (skb = dev_alloc_skb(rxlen + 2) ) != NULL) ) { /* size less than COPY_SIZE, allocate a rxlen SKB */ - skb->dev = dev; skb_reserve(skb, 2); /* 16byte align */ - memcpy(skb_put(skb, rxlen), rxptr->rx_skb_ptr->tail, rxlen); + memcpy(skb_put(skb, rxlen), + skb_tail_pointer(rxptr->rx_skb_ptr), + rxlen); uli526x_reuse_skb(db, rxptr->rx_skb_ptr); - } else { - skb->dev = dev; + } else skb_put(skb, rxlen); - } + skb->protocol = eth_type_trans(skb, dev); netif_rx(skb); dev->last_rx = jiffies; @@ -1177,7 +1177,10 @@ static void uli526x_reuse_skb(struct uli if (!(rxptr->rdes0 & cpu_to_le32(0x80000000))) { rxptr->rx_skb_ptr = skb; - rxptr->rdes2 = cpu_to_le32( pci_map_single(db->pdev, skb->tail, RX_ALLOC_SIZE, PCI_DMA_FROMDEVICE) ); + rxptr->rdes2 = cpu_to_le32(pci_map_single(db->pdev, + skb_tail_pointer(skb), + RX_ALLOC_SIZE, + PCI_DMA_FROMDEVICE)); wmb(); rxptr->rdes0 = cpu_to_le32(0x80000000); db->rx_avail_cnt++; @@ -1341,7 +1344,10 @@ static void allocate_rx_buffer(struct ul if ( ( skb = dev_alloc_skb(RX_ALLOC_SIZE) ) == NULL ) break; rxptr->rx_skb_ptr = skb; /* FIXME (?) */ - rxptr->rdes2 = cpu_to_le32( pci_map_single(db->pdev, skb->tail, RX_ALLOC_SIZE, PCI_DMA_FROMDEVICE) ); + rxptr->rdes2 = cpu_to_le32(pci_map_single(db->pdev, + skb_tail_pointer(skb), + RX_ALLOC_SIZE, + PCI_DMA_FROMDEVICE)); wmb(); rxptr->rdes0 = cpu_to_le32(0x80000000); rxptr = rxptr->next_rx_desc; diff -puN drivers/net/tulip/winbond-840.c~git-net drivers/net/tulip/winbond-840.c --- a/drivers/net/tulip/winbond-840.c~git-net +++ a/drivers/net/tulip/winbond-840.c @@ -813,7 +813,6 @@ static void init_rxtx_rings(struct net_d np->rx_skbuff[i] = skb; if (skb == NULL) break; - skb->dev = dev; /* Mark as being used by this device. */ np->rx_addr[i] = pci_map_single(np->pci_dev,skb->data, np->rx_buf_sz,PCI_DMA_FROMDEVICE); @@ -1229,7 +1228,6 @@ static int netdev_rx(struct net_device * to a minimally-sized skbuff. */ if (pkt_len < rx_copybreak && (skb = dev_alloc_skb(pkt_len + 2)) != NULL) { - skb->dev = dev; skb_reserve(skb, 2); /* 16 byte align the IP header */ pci_dma_sync_single_for_cpu(np->pci_dev,np->rx_addr[entry], np->rx_skbuff[entry]->len, @@ -1278,7 +1276,6 @@ static int netdev_rx(struct net_device * np->rx_skbuff[entry] = skb; if (skb == NULL) break; /* Better luck next round. */ - skb->dev = dev; /* Mark as being used by this device. */ np->rx_addr[entry] = pci_map_single(np->pci_dev, skb->data, np->rx_buf_sz, PCI_DMA_FROMDEVICE); diff -puN drivers/net/tulip/xircom_cb.c~git-net drivers/net/tulip/xircom_cb.c --- a/drivers/net/tulip/xircom_cb.c~git-net +++ a/drivers/net/tulip/xircom_cb.c @@ -411,9 +411,9 @@ static int xircom_start_xmit(struct sk_b sometimes sends more than you ask it to. 
*/ memset(&card->tx_buffer[bufferoffsets[desc]/4],0,1536); - memcpy(&(card->tx_buffer[bufferoffsets[desc]/4]),skb->data,skb->len); - - + skb_copy_from_linear_data(skb, + &(card->tx_buffer[bufferoffsets[desc] / 4]), + skb->len); /* FIXME: The specification tells us that the length we send HAS to be a multiple of 4 bytes. */ @@ -1207,7 +1207,6 @@ static void investigate_read_descriptor( card->stats.rx_dropped++; goto out; } - skb->dev = dev; skb_reserve(skb, 2); eth_copy_and_sum(skb, (unsigned char*)&card->rx_buffer[bufferoffset / 4], pkt_len, 0); skb_put(skb, pkt_len); diff -puN drivers/net/tulip/xircom_tulip_cb.c~git-net drivers/net/tulip/xircom_tulip_cb.c --- a/drivers/net/tulip/xircom_tulip_cb.c~git-net +++ a/drivers/net/tulip/xircom_tulip_cb.c @@ -915,7 +915,9 @@ xircom_start_xmit(struct sk_buff *skb, s tp->tx_skbuff[entry] = skb; if (tp->chip_id == X3201_3) { - memcpy(tp->tx_aligned_skbuff[entry]->data,skb->data,skb->len); + skb_copy_from_linear_data(skb, + tp->tx_aligned_skbuff[entry]->data, + skb->len); tp->tx_ring[entry].buffer1 = virt_to_bus(tp->tx_aligned_skbuff[entry]->data); } else tp->tx_ring[entry].buffer1 = virt_to_bus(skb->data); @@ -1238,7 +1240,6 @@ xircom_rx(struct net_device *dev) to a minimally-sized skbuff. */ if (pkt_len < rx_copybreak && (skb = dev_alloc_skb(pkt_len + 2)) != NULL) { - skb->dev = dev; skb_reserve(skb, 2); /* 16 byte align the IP header */ #if ! defined(__alpha__) eth_copy_and_sum(skb, bus_to_virt(tp->rx_ring[entry].buffer1), diff -puN drivers/net/tun.c~git-net drivers/net/tun.c --- a/drivers/net/tun.c~git-net +++ a/drivers/net/tun.c @@ -254,11 +254,11 @@ static __inline__ ssize_t tun_get_user(s return -EFAULT; } - skb->dev = tun->dev; switch (tun->flags & TUN_TYPE_MASK) { case TUN_TUN_DEV: - skb->mac.raw = skb->data; + skb_reset_mac_header(skb); skb->protocol = pi.proto; + skb->dev = tun->dev; break; case TUN_TAP_DEV: skb->protocol = eth_type_trans(skb, tun->dev); @@ -386,8 +386,8 @@ static ssize_t tun_chr_aio_read(struct k * - we are multicast promiscous. * - we belong to the multicast group. */ - memcpy(addr, skb->data, - min_t(size_t, sizeof addr, skb->len)); + skb_copy_from_linear_data(skb, addr, min_t(size_t, sizeof addr, + skb->len)); bit_nr = ether_crc(sizeof addr, addr) >> 26; if ((tun->if_flags & IFF_PROMISC) || memcmp(addr, tun->dev_addr, sizeof addr) == 0 || diff -puN drivers/net/typhoon.c~git-net drivers/net/typhoon.c --- a/drivers/net/typhoon.c~git-net +++ a/drivers/net/typhoon.c @@ -1708,7 +1708,6 @@ typhoon_rx(struct typhoon *tp, struct ba if(pkt_len < rx_copybreak && (new_skb = dev_alloc_skb(pkt_len + 2)) != NULL) { - new_skb->dev = tp->dev; skb_reserve(new_skb, 2); pci_dma_sync_single_for_cpu(tp->pdev, dma_addr, PKT_BUF_SZ, diff -puN drivers/net/via-rhine.c~git-net drivers/net/via-rhine.c --- a/drivers/net/via-rhine.c~git-net +++ a/drivers/net/via-rhine.c @@ -1486,7 +1486,6 @@ static int rhine_rx(struct net_device *d copying to a minimally-sized skbuff. 
*/ if (pkt_len < rx_copybreak && (skb = dev_alloc_skb(pkt_len + 2)) != NULL) { - skb->dev = dev; skb_reserve(skb, 2); /* 16 byte align the IP header */ pci_dma_sync_single_for_cpu(rp->pdev, rp->rx_skbuff_dma[entry], diff -puN drivers/net/via-velocity.c~git-net drivers/net/via-velocity.c --- a/drivers/net/via-velocity.c~git-net +++ a/drivers/net/via-velocity.c @@ -1339,7 +1339,8 @@ static inline int velocity_rx_copy(struc if (vptr->flags & VELOCITY_FLAGS_IP_ALIGN) skb_reserve(new_skb, 2); - memcpy(new_skb->data, rx_skb[0]->data, pkt_size); + skb_copy_from_linear_data(rx_skb[0], new_skb->data, + pkt_size); *rx_skb = new_skb; ret = 0; } @@ -1398,7 +1399,6 @@ static int velocity_receive_frame(struct vptr->stats.multicast++; skb = rd_info->skb; - skb->dev = vptr->dev; pci_dma_sync_single_for_cpu(vptr->pdev, rd_info->skb_dma, vptr->rx_buf_sz, PCI_DMA_FROMDEVICE); @@ -1428,7 +1428,7 @@ static int velocity_receive_frame(struct PCI_DMA_FROMDEVICE); skb_put(skb, pkt_len - 4); - skb->protocol = eth_type_trans(skb, skb->dev); + skb->protocol = eth_type_trans(skb, vptr->dev); stats->rx_bytes += pkt_len; netif_rx(skb); @@ -1928,7 +1928,7 @@ static int velocity_xmit(struct sk_buff if (pktlen < ETH_ZLEN) { /* Cannot occur until ZC support */ pktlen = ETH_ZLEN; - memcpy(tdinfo->buf, skb->data, skb->len); + skb_copy_from_linear_data(skb, tdinfo->buf, skb->len); memset(tdinfo->buf + skb->len, 0, ETH_ZLEN - skb->len); tdinfo->skb = skb; tdinfo->skb_dma[0] = tdinfo->buf_dma; @@ -1944,7 +1944,7 @@ static int velocity_xmit(struct sk_buff int nfrags = skb_shinfo(skb)->nr_frags; tdinfo->skb = skb; if (nfrags > 6) { - memcpy(tdinfo->buf, skb->data, skb->len); + skb_copy_from_linear_data(skb, tdinfo->buf, skb->len); tdinfo->skb_dma[0] = tdinfo->buf_dma; td_ptr->tdesc0.pktsize = td_ptr->td_buf[0].pa_low = cpu_to_le32(tdinfo->skb_dma[0]); @@ -2007,7 +2007,7 @@ static int velocity_xmit(struct sk_buff */ if ((vptr->flags & VELOCITY_FLAGS_TX_CSUM) && (skb->ip_summed == CHECKSUM_PARTIAL)) { - struct iphdr *ip = skb->nh.iph; + const struct iphdr *ip = ip_hdr(skb); if (ip->protocol == IPPROTO_TCP) td_ptr->tdesc1.TCR |= TCR0_TCPCK; else if (ip->protocol == IPPROTO_UDP) diff -puN drivers/net/wan/cosa.c~git-net drivers/net/wan/cosa.c --- a/drivers/net/wan/cosa.c~git-net +++ a/drivers/net/wan/cosa.c @@ -773,7 +773,7 @@ static int sppp_rx_done(struct channel_d } chan->rx_skb->protocol = htons(ETH_P_WAN_PPP); chan->rx_skb->dev = chan->pppdev.dev; - chan->rx_skb->mac.raw = chan->rx_skb->data; + skb_reset_mac_header(chan->rx_skb); chan->stats.rx_packets++; chan->stats.rx_bytes += chan->cosa->rxsize; netif_rx(chan->rx_skb); diff -puN drivers/net/wan/cycx_x25.c~git-net drivers/net/wan/cycx_x25.c --- a/drivers/net/wan/cycx_x25.c~git-net +++ a/drivers/net/wan/cycx_x25.c @@ -834,7 +834,7 @@ static void cycx_x25_irq_rx(struct cycx_ ++chan->ifstats.rx_packets; chan->ifstats.rx_bytes += pktlen; - skb->mac.raw = skb->data; + skb_reset_mac_header(skb); netif_rx(skb); dev->last_rx = jiffies; /* timestamp */ } diff -puN drivers/net/wan/dlci.c~git-net drivers/net/wan/dlci.c --- a/drivers/net/wan/dlci.c~git-net +++ a/drivers/net/wan/dlci.c @@ -176,7 +176,7 @@ static void dlci_receive(struct sk_buff if (process) { /* we've set up the protocol, so discard the header */ - skb->mac.raw = skb->data; + skb_reset_mac_header(skb); skb_pull(skb, header); dlp->stats.rx_bytes += skb->len; netif_rx(skb); diff -puN drivers/net/wan/dscc4.c~git-net drivers/net/wan/dscc4.c --- a/drivers/net/wan/dscc4.c~git-net +++ a/drivers/net/wan/dscc4.c @@ -1904,7 +1904,8 @@ 
static struct sk_buff *dscc4_init_dummy_ struct TxFD *tx_fd = dpriv->tx_fd + last; skb->len = DUMMY_SKB_SIZE; - memcpy(skb->data, version, strlen(version)%DUMMY_SKB_SIZE); + skb_copy_to_linear_data(skb, version, + strlen(version) % DUMMY_SKB_SIZE); tx_fd->state = FrameEnd | TO_STATE_TX(DUMMY_SKB_SIZE); tx_fd->data = pci_map_single(dpriv->pci_priv->pdev, skb->data, DUMMY_SKB_SIZE, PCI_DMA_TODEVICE); diff -puN drivers/net/wan/farsync.c~git-net drivers/net/wan/farsync.c --- a/drivers/net/wan/farsync.c~git-net +++ a/drivers/net/wan/farsync.c @@ -864,7 +864,7 @@ fst_tx_dma_complete(struct fst_card_info static __be16 farsync_type_trans(struct sk_buff *skb, struct net_device *dev) { skb->dev = dev; - skb->mac.raw = skb->data; + skb_reset_mac_header(skb); skb->pkt_type = PACKET_HOST; return htons(ETH_P_CUST); } diff -puN drivers/net/wan/hdlc_cisco.c~git-net drivers/net/wan/hdlc_cisco.c --- a/drivers/net/wan/hdlc_cisco.c~git-net +++ a/drivers/net/wan/hdlc_cisco.c @@ -124,7 +124,7 @@ static void cisco_keepalive_send(struct skb_put(skb, sizeof(struct cisco_packet)); skb->priority = TC_PRIO_CONTROL; skb->dev = dev; - skb->nh.raw = skb->data; + skb_reset_network_header(skb); dev_queue_xmit(skb); } diff -puN drivers/net/wan/hdlc_fr.c~git-net drivers/net/wan/hdlc_fr.c --- a/drivers/net/wan/hdlc_fr.c~git-net +++ a/drivers/net/wan/hdlc_fr.c @@ -533,7 +533,7 @@ static void fr_lmi_send(struct net_devic skb->protocol = __constant_htons(NLPID_CCITT_ANSI_LMI); fr_hard_header(&skb, LMI_CCITT_ANSI_DLCI); } - data = skb->tail; + data = skb_tail_pointer(skb); data[i++] = LMI_CALLREF; data[i++] = dce ? LMI_STATUS : LMI_STATUS_ENQUIRY; if (lmi == LMI_ANSI) @@ -590,7 +590,7 @@ static void fr_lmi_send(struct net_devic skb_put(skb, i); skb->priority = TC_PRIO_CONTROL; skb->dev = dev; - skb->nh.raw = skb->data; + skb_reset_network_header(skb); dev_queue_xmit(skb); } @@ -1011,7 +1011,6 @@ static int fr_rx(struct sk_buff *skb) stats->rx_bytes += skb->len; if (pvc->state.becn) stats->rx_compressed++; - skb->dev = dev; netif_rx(skb); return NET_RX_SUCCESS; } else { diff -puN drivers/net/wan/hostess_sv11.c~git-net drivers/net/wan/hostess_sv11.c --- a/drivers/net/wan/hostess_sv11.c~git-net +++ a/drivers/net/wan/hostess_sv11.c @@ -59,7 +59,7 @@ static void hostess_input(struct z8530_c /* Drop the CRC - it's not a good idea to try and negotiate it ;) */ skb_trim(skb, skb->len-2); skb->protocol=__constant_htons(ETH_P_WAN_PPP); - skb->mac.raw=skb->data; + skb_reset_mac_header(skb); skb->dev=c->netdevice; /* * Send it to the PPP layer. 
We don't have time to process diff -puN drivers/net/wan/lmc/lmc_main.c~git-net drivers/net/wan/lmc/lmc_main.c --- a/drivers/net/wan/lmc/lmc_main.c~git-net +++ a/drivers/net/wan/lmc/lmc_main.c @@ -1636,7 +1636,7 @@ static int lmc_rx (struct net_device *de if (nsb) { sc->lmc_rxq[i] = nsb; nsb->dev = dev; - sc->lmc_rxring[i].buffer1 = virt_to_bus (nsb->tail); + sc->lmc_rxring[i].buffer1 = virt_to_bus(skb_tail_pointer(nsb)); } sc->failed_recv_alloc = 1; goto skip_packet; @@ -1667,8 +1667,8 @@ static int lmc_rx (struct net_device *de skb_put (skb, len); skb->protocol = lmc_proto_type(sc, skb); skb->protocol = htons(ETH_P_WAN_PPP); - skb->mac.raw = skb->data; -// skb->nh.raw = skb->data; + skb_reset_mac_header(skb); + /* skb_reset_network_header(skb); */ skb->dev = dev; lmc_proto_netif(sc, skb); @@ -1679,7 +1679,7 @@ static int lmc_rx (struct net_device *de if (nsb) { sc->lmc_rxq[i] = nsb; nsb->dev = dev; - sc->lmc_rxring[i].buffer1 = virt_to_bus (nsb->tail); + sc->lmc_rxring[i].buffer1 = virt_to_bus(skb_tail_pointer(nsb)); /* Transferred to 21140 below */ } else { @@ -1702,11 +1702,11 @@ static int lmc_rx (struct net_device *de if(!nsb) { goto give_it_anyways; } - memcpy(skb_put(nsb, len), skb->data, len); + skb_copy_from_linear_data(skb, skb_put(nsb, len), len); nsb->protocol = lmc_proto_type(sc, skb); - nsb->mac.raw = nsb->data; -// nsb->nh.raw = nsb->data; + skb_reset_mac_header(nsb); + /* skb_reset_network_header(nsb); */ nsb->dev = dev; lmc_proto_netif(sc, nsb); } @@ -1932,7 +1932,7 @@ static void lmc_softreset (lmc_softc_t * sc->lmc_rxring[i].status = 0x80000000; /* used to be PKT_BUF_SZ now uses skb since we lose some to head room */ - sc->lmc_rxring[i].length = skb->end - skb->data; + sc->lmc_rxring[i].length = skb_tailroom(skb); /* use to be tail which is dumb since you're thinking why write * to the end of the packj,et but since there's nothing there tail == data diff -puN drivers/net/wan/pc300_drv.c~git-net drivers/net/wan/pc300_drv.c --- a/drivers/net/wan/pc300_drv.c~git-net +++ a/drivers/net/wan/pc300_drv.c @@ -1755,17 +1755,17 @@ cpc_trace(struct net_device *dev, struct skb->dev = dev; skb->protocol = htons(ETH_P_CUST); - skb->mac.raw = skb->data; + skb_reset_mac_header(skb); skb->pkt_type = PACKET_HOST; skb->len = 10 + skb_main->len; - memcpy(skb->data, dev->name, 5); + skb_copy_to_linear_data(skb, dev->name, 5); skb->data[5] = '['; skb->data[6] = rx_tx; skb->data[7] = ']'; skb->data[8] = ':'; skb->data[9] = ' '; - memcpy(&skb->data[10], skb_main->data, skb_main->len); + skb_copy_from_linear_data(skb_main, &skb->data[10], skb_main->len); netif_rx(skb); } diff -puN drivers/net/wan/pc300_tty.c~git-net drivers/net/wan/pc300_tty.c --- a/drivers/net/wan/pc300_tty.c~git-net +++ a/drivers/net/wan/pc300_tty.c @@ -1003,17 +1003,17 @@ static void cpc_tty_trace(pc300dev_t *de skb_put (skb, 10 + len); skb->dev = dev->dev; skb->protocol = htons(ETH_P_CUST); - skb->mac.raw = skb->data; + skb_reset_mac_header(skb); skb->pkt_type = PACKET_HOST; skb->len = 10 + len; - memcpy(skb->data,dev->dev->name,5); + skb_copy_to_linear_data(skb, dev->dev->name, 5); skb->data[5] = '['; skb->data[6] = rxtx; skb->data[7] = ']'; skb->data[8] = ':'; skb->data[9] = ' '; - memcpy(&skb->data[10], buf, len); + skb_copy_to_linear_data_offset(skb, 10, buf, len); netif_rx(skb); } diff -puN drivers/net/wan/sbni.c~git-net drivers/net/wan/sbni.c --- a/drivers/net/wan/sbni.c~git-net +++ a/drivers/net/wan/sbni.c @@ -999,11 +999,6 @@ get_rx_buf( struct net_device *dev ) if( !skb ) return NULL; -#ifdef CONFIG_SBNI_MULTILINE - 
skb->dev = ((struct net_local *) dev->priv)->master; -#else - skb->dev = dev; -#endif skb_reserve( skb, 2 ); /* Align IP on longword boundaries */ return skb; } diff -puN drivers/net/wan/sealevel.c~git-net drivers/net/wan/sealevel.c --- a/drivers/net/wan/sealevel.c~git-net +++ a/drivers/net/wan/sealevel.c @@ -61,7 +61,7 @@ static void sealevel_input(struct z8530_ /* Drop the CRC - it's not a good idea to try and negotiate it ;) */ skb_trim(skb, skb->len-2); skb->protocol=htons(ETH_P_WAN_PPP); - skb->mac.raw=skb->data; + skb_reset_mac_header(skb); skb->dev=c->netdevice; /* * Send it to the PPP layer. We don't have time to process diff -puN drivers/net/wan/syncppp.c~git-net drivers/net/wan/syncppp.c --- a/drivers/net/wan/syncppp.c~git-net +++ a/drivers/net/wan/syncppp.c @@ -227,7 +227,7 @@ static void sppp_input (struct net_devic unsigned long flags; skb->dev=dev; - skb->mac.raw=skb->data; + skb_reset_mac_header(skb); if (dev->flags & IFF_RUNNING) { diff -puN drivers/net/wan/z85230.c~git-net drivers/net/wan/z85230.c --- a/drivers/net/wan/z85230.c~git-net +++ a/drivers/net/wan/z85230.c @@ -1656,7 +1656,7 @@ static void z8530_rx_done(struct z8530_c else { skb_put(skb, ct); - memcpy(skb->data, rxb, ct); + skb_copy_to_linear_data(skb, rxb, ct); c->stats.rx_packets++; c->stats.rx_bytes+=ct; } @@ -1782,7 +1782,7 @@ int z8530_queue_xmit(struct z8530_channe */ c->tx_next_ptr=c->tx_dma_buf[c->tx_dma_used]; c->tx_dma_used^=1; /* Flip temp buffer */ - memcpy(c->tx_next_ptr, skb->data, skb->len); + skb_copy_from_linear_data(skb, c->tx_next_ptr, skb->len); } else c->tx_next_ptr=skb->data; diff -puN drivers/net/wireless/Kconfig~git-net drivers/net/wireless/Kconfig --- a/drivers/net/wireless/Kconfig~git-net +++ a/drivers/net/wireless/Kconfig @@ -2,47 +2,21 @@ # Wireless LAN device configuration # -menu "Wireless LAN (non-hamradio)" - depends on NETDEVICES +menu "Wireless LAN" -config NET_RADIO - bool "Wireless LAN drivers (non-hamradio) & Wireless Extensions" - select WIRELESS_EXT +config WLAN_PRE80211 + bool "Wireless LAN (pre-802.11)" + depends on NETDEVICES ---help--- - Support for wireless LANs and everything having to do with radio, - but not with amateur radio or FM broadcasting. + Say Y if you have any pre-802.11 wireless LAN hardware. - Saying Y here also enables the Wireless Extensions (creates - /proc/net/wireless and enables iwconfig access). The Wireless - Extension is a generic API allowing a driver to expose to the user - space configuration and statistics specific to common Wireless LANs. - The beauty of it is that a single set of tool can support all the - variations of Wireless LANs, regardless of their type (as long as - the driver supports Wireless Extension). Another advantage is that - these parameters may be changed on the fly without restarting the - driver (or Linux). If you wish to use Wireless Extensions with - wireless PCMCIA (PC-) cards, you need to say Y here; you can fetch - the tools from - . - -config NET_WIRELESS_RTNETLINK - bool "Wireless Extension API over RtNetlink" - depends on NET_RADIO - ---help--- - Support the Wireless Extension API over the RtNetlink socket - in addition to the traditional ioctl interface (selected above). - - For now, few tools use this facility, but it might grow in the - future. The only downside is that it adds 4.5 kB to your kernel. - -# Note : the cards are obsolete (can't buy them anymore), but the drivers -# are not, as people are still using them... 
-comment "Obsolete Wireless cards support (pre-802.11)" - depends on NET_RADIO && (INET || ISA || PCMCIA) + This option does not affect the kernel build, it only + let's you choose drivers. config STRIP tristate "STRIP (Metricom starmode radio IP)" - depends on NET_RADIO && INET + depends on INET && WLAN_PRE80211 + select WIRELESS_EXT ---help--- Say Y if you have a Metricom radio and intend to use Starmode Radio IP. STRIP is a radio protocol developed for the MosquitoNet project @@ -65,7 +39,8 @@ config STRIP config ARLAN tristate "Aironet Arlan 655 & IC2200 DS support" - depends on NET_RADIO && ISA && !64BIT + depends on ISA && !64BIT && WLAN_PRE80211 + select WIRELESS_EXT ---help--- Aironet makes Arlan, a class of wireless LAN adapters. These use the www.Telxon.com chip, which is also used on several similar cards. @@ -80,7 +55,8 @@ config ARLAN config WAVELAN tristate "AT&T/Lucent old WaveLAN & DEC RoamAbout DS ISA support" - depends on NET_RADIO && ISA + depends on ISA && WLAN_PRE80211 + select WIRELESS_EXT ---help--- The Lucent WaveLAN (formerly NCR and AT&T; or DEC RoamAbout DS) is a Radio LAN (wireless Ethernet-like Local Area Network) using the @@ -107,7 +83,8 @@ config WAVELAN config PCMCIA_WAVELAN tristate "AT&T/Lucent old WaveLAN Pcmcia wireless support" - depends on NET_RADIO && PCMCIA + depends on PCMCIA && WLAN_PRE80211 + select WIRELESS_EXT help Say Y here if you intend to attach an AT&T/Lucent Wavelan PCMCIA (PC-card) wireless Ethernet networking card to your computer. This @@ -118,7 +95,8 @@ config PCMCIA_WAVELAN config PCMCIA_NETWAVE tristate "Xircom Netwave AirSurfer Pcmcia wireless support" - depends on NET_RADIO && PCMCIA + depends on PCMCIA && WLAN_PRE80211 + select WIRELESS_EXT help Say Y here if you intend to attach this type of PCMCIA (PC-card) wireless Ethernet networking card to your computer. @@ -126,12 +104,20 @@ config PCMCIA_NETWAVE To compile this driver as a module, choose M here: the module will be called netwave_cs. If unsure, say N. -comment "Wireless 802.11 Frequency Hopping cards support" - depends on NET_RADIO && PCMCIA + +config WLAN_80211 + bool "Wireless LAN (IEEE 802.11)" + depends on NETDEVICES + ---help--- + Say Y if you have any 802.11 wireless LAN hardware. + + This option does not affect the kernel build, it only + let's you choose drivers. config PCMCIA_RAYCS tristate "Aviator/Raytheon 2.4MHz wireless support" - depends on NET_RADIO && PCMCIA + depends on PCMCIA && WLAN_80211 + select WIRELESS_EXT ---help--- Say Y here if you intend to attach an Aviator/Raytheon PCMCIA (PC-card) wireless Ethernet networking card to your computer. @@ -141,12 +127,10 @@ config PCMCIA_RAYCS To compile this driver as a module, choose M here: the module will be called ray_cs. If unsure, say N. 
-comment "Wireless 802.11b ISA/PCI cards support" - depends on NET_RADIO && (ISA || PCI || PPC_PMAC || PCMCIA) - config IPW2100 tristate "Intel PRO/Wireless 2100 Network Connection" - depends on NET_RADIO && PCI + depends on PCI && WLAN_80211 + select WIRELESS_EXT select FW_LOADER select IEEE80211 ---help--- @@ -200,7 +184,8 @@ config IPW2100_DEBUG config IPW2200 tristate "Intel PRO/Wireless 2200BG and 2915ABG Network Connection" - depends on NET_RADIO && PCI + depends on PCI && WLAN_80211 + select WIRELESS_EXT select FW_LOADER select IEEE80211 ---help--- @@ -295,7 +280,8 @@ config LIBERTAS_USB_DEBUG config AIRO tristate "Cisco/Aironet 34X/35X/4500/4800 ISA and PCI cards" - depends on NET_RADIO && ISA_DMA_API && (PCI || BROKEN) + depends on ISA_DMA_API && WLAN_80211 && (PCI || BROKEN) + select WIRELESS_EXT select CRYPTO ---help--- This is the standard Linux driver to support Cisco/Aironet ISA and @@ -312,7 +298,8 @@ config AIRO config HERMES tristate "Hermes chipset 802.11b support (Orinoco/Prism2/Symbol)" - depends on NET_RADIO && (PPC_PMAC || PCI || PCMCIA) + depends on (PPC_PMAC || PCI || PCMCIA) && WLAN_80211 + select WIRELESS_EXT ---help--- A driver for 802.11b wireless cards based on the "Hermes" or Intersil HFA384x (Prism 2) MAC controller. This includes the vast @@ -386,7 +373,8 @@ config PCI_HERMES config ATMEL tristate "Atmel at76c50x chipset 802.11b support" - depends on NET_RADIO && (PCI || PCMCIA) + depends on (PCI || PCMCIA) && WLAN_80211 + select WIRELESS_EXT select FW_LOADER select CRC32 ---help--- @@ -407,13 +395,9 @@ config PCI_ATMEL Enable support for PCI and mini-PCI cards containing the Atmel at76c506 chip. -# If Pcmcia is compiled in, offer Pcmcia cards... -comment "Wireless 802.11b Pcmcia/Cardbus cards support" - depends on NET_RADIO && PCMCIA - config PCMCIA_HERMES tristate "Hermes PCMCIA card support" - depends on NET_RADIO && PCMCIA && HERMES + depends on PCMCIA && HERMES ---help--- A driver for "Hermes" chipset based PCMCIA wireless adaptors, such as the Lucent WavelanIEEE/Orinoco cards and their OEM (Cabletron/ @@ -433,7 +417,7 @@ config PCMCIA_HERMES config PCMCIA_SPECTRUM tristate "Symbol Spectrum24 Trilogy PCMCIA card support" - depends on NET_RADIO && PCMCIA && HERMES + depends on PCMCIA && HERMES select FW_LOADER ---help--- @@ -447,7 +431,8 @@ config PCMCIA_SPECTRUM config AIRO_CS tristate "Cisco/Aironet 34X/35X/4500/4800 PCMCIA cards" - depends on NET_RADIO && PCMCIA && (BROKEN || !M32R) + depends on PCMCIA && (BROKEN || !M32R) && WLAN_80211 + select WIRELESS_EXT select CRYPTO select CRYPTO_AES ---help--- @@ -471,7 +456,8 @@ config AIRO_CS config PCMCIA_ATMEL tristate "Atmel at76c502/at76c504 PCMCIA cards" - depends on NET_RADIO && ATMEL && PCMCIA + depends on ATMEL && PCMCIA + select WIRELESS_EXT select FW_LOADER select CRC32 ---help--- @@ -480,17 +466,17 @@ config PCMCIA_ATMEL config PCMCIA_WL3501 tristate "Planet WL3501 PCMCIA cards" - depends on NET_RADIO && EXPERIMENTAL && PCMCIA + depends on EXPERIMENTAL && PCMCIA && WLAN_80211 + select WIRELESS_EXT ---help--- A driver for WL3501 PCMCIA 802.11 wireless cards made by Planet. It has basic support for Linux wireless extensions and initial micro support for ethtool. 
-comment "Prism GT/Duette 802.11(a/b/g) PCI/Cardbus support" - depends on NET_RADIO && PCI config PRISM54 tristate 'Intersil Prism GT/Duette/Indigo PCI/Cardbus' - depends on PCI && NET_RADIO && EXPERIMENTAL + depends on PCI && EXPERIMENTAL && WLAN_80211 + select WIRELESS_EXT select FW_LOADER ---help--- Enable PCI and Cardbus support for the following chipset based cards: @@ -536,7 +522,8 @@ config PRISM54 config USB_ZD1201 tristate "USB ZD1201 based Wireless device support" - depends on USB && NET_RADIO + depends on USB && WLAN_80211 + select WIRELESS_EXT select FW_LOADER ---help--- Say Y if you want to use wireless LAN adapters based on the ZyDAS @@ -555,11 +542,4 @@ source "drivers/net/wireless/hostap/Kcon source "drivers/net/wireless/bcm43xx/Kconfig" source "drivers/net/wireless/zd1211rw/Kconfig" -# yes, this works even when no drivers are selected -config NET_WIRELESS - bool - depends on NET_RADIO && (ISA || PCI || PPC_PMAC || PCMCIA) - default y - endmenu - diff -puN drivers/net/wireless/airo.c~git-net drivers/net/wireless/airo.c --- a/drivers/net/wireless/airo.c~git-net +++ a/drivers/net/wireless/airo.c @@ -2456,7 +2456,7 @@ EXPORT_SYMBOL(stop_airo_card); static int wll_header_parse(struct sk_buff *skb, unsigned char *haddr) { - memcpy(haddr, skb->mac.raw + 10, ETH_ALEN); + memcpy(haddr, skb_mac_header(skb) + 10, ETH_ALEN); return ETH_ALEN; } @@ -3418,14 +3418,12 @@ badrx: OUT4500( apriv, EVACK, EV_RX); if (test_bit(FLAG_802_11, &apriv->flags)) { - skb->mac.raw = skb->data; + skb_reset_mac_header(skb); skb->pkt_type = PACKET_OTHERHOST; skb->dev = apriv->wifidev; skb->protocol = htons(ETH_P_802_2); - } else { - skb->dev = dev; + } else skb->protocol = eth_type_trans(skb,dev); - } skb->dev->last_rx = jiffies; skb->ip_summed = CHECKSUM_NONE; @@ -3648,7 +3646,6 @@ badmic: } #endif /* WIRELESS_SPY */ - skb->dev = ai->dev; skb->ip_summed = CHECKSUM_NONE; skb->protocol = eth_type_trans(skb, ai->dev); skb->dev->last_rx = jiffies; @@ -3756,7 +3753,7 @@ void mpi_receive_802_11 (struct airo_inf wireless_spy_update(ai->dev, sa, &wstats); } #endif /* IW_WIRELESS_SPY */ - skb->mac.raw = skb->data; + skb_reset_mac_header(skb); skb->pkt_type = PACKET_OTHERHOST; skb->dev = ai->wifidev; skb->protocol = htons(ETH_P_802_2); diff -puN drivers/net/wireless/arlan-main.c~git-net drivers/net/wireless/arlan-main.c --- a/drivers/net/wireless/arlan-main.c~git-net +++ a/drivers/net/wireless/arlan-main.c @@ -1500,7 +1500,6 @@ static void arlan_rx_interrupt(struct ne break; } skb_reserve(skb, 2); - skb->dev = dev; skbtmp = skb_put(skb, pkt_len); memcpy_fromio(skbtmp + ARLAN_FAKE_HDR_LEN, ((char __iomem *) arlan) + rxOffset, pkt_len - ARLAN_FAKE_HDR_LEN); diff -puN drivers/net/wireless/atmel.c~git-net drivers/net/wireless/atmel.c --- a/drivers/net/wireless/atmel.c~git-net +++ a/drivers/net/wireless/atmel.c @@ -827,14 +827,14 @@ static int start_tx(struct sk_buff *skb, if (priv->wep_is_on) frame_ctl |= IEEE80211_FCTL_PROTECTED; if (priv->operating_mode == IW_MODE_ADHOC) { - memcpy(&header.addr1, skb->data, 6); + skb_copy_from_linear_data(skb, &header.addr1, 6); memcpy(&header.addr2, dev->dev_addr, 6); memcpy(&header.addr3, priv->BSSID, 6); } else { frame_ctl |= IEEE80211_FCTL_TODS; memcpy(&header.addr1, priv->CurrentBSSID, 6); memcpy(&header.addr2, dev->dev_addr, 6); - memcpy(&header.addr3, skb->data, 6); + skb_copy_from_linear_data(skb, &header.addr3, 6); } if (priv->use_wpa) @@ -920,7 +920,6 @@ static void fast_rx_path(struct atmel_pr memcpy(&skbp[6], header->addr2, 6); /* source address */ priv->dev->last_rx = 
jiffies; - skb->dev = priv->dev; skb->protocol = eth_type_trans(skb, priv->dev); skb->ip_summed = CHECKSUM_NONE; netif_rx(skb); @@ -1028,7 +1027,6 @@ static void frag_rx_path(struct atmel_pr priv->rx_buf, priv->frag_len + 12); priv->dev->last_rx = jiffies; - skb->dev = priv->dev; skb->protocol = eth_type_trans(skb, priv->dev); skb->ip_summed = CHECKSUM_NONE; netif_rx(skb); diff -puN drivers/net/wireless/bcm43xx/Kconfig~git-net drivers/net/wireless/bcm43xx/Kconfig --- a/drivers/net/wireless/bcm43xx/Kconfig~git-net +++ a/drivers/net/wireless/bcm43xx/Kconfig @@ -1,6 +1,7 @@ config BCM43XX tristate "Broadcom BCM43xx wireless support" - depends on PCI && IEEE80211 && IEEE80211_SOFTMAC && NET_RADIO && EXPERIMENTAL + depends on PCI && IEEE80211 && IEEE80211_SOFTMAC && WLAN_80211 && EXPERIMENTAL + select WIRELESS_EXT select FW_LOADER select HW_RANDOM ---help--- diff -puN drivers/net/wireless/bcm43xx/bcm43xx_dma.c~git-net drivers/net/wireless/bcm43xx/bcm43xx_dma.c --- a/drivers/net/wireless/bcm43xx/bcm43xx_dma.c~git-net +++ a/drivers/net/wireless/bcm43xx/bcm43xx_dma.c @@ -998,7 +998,8 @@ static void dma_tx_fragment(struct bcm43 assert(0); return; } - memcpy(skb_put(bounce_skb, skb->len), skb->data, skb->len); + skb_copy_from_linear_data(skb, skb_put(bounce_skb, skb->len), + skb->len); dev_kfree_skb_any(skb); skb = bounce_skb; } diff -puN drivers/net/wireless/hostap/Kconfig~git-net drivers/net/wireless/hostap/Kconfig --- a/drivers/net/wireless/hostap/Kconfig~git-net +++ a/drivers/net/wireless/hostap/Kconfig @@ -1,6 +1,7 @@ config HOSTAP tristate "IEEE 802.11 for Host AP (Prism2/2.5/3 and WEP/TKIP/CCMP)" - depends on NET_RADIO + depends on WLAN_80211 + select WIRELESS_EXT select IEEE80211 select IEEE80211_CRYPT_WEP ---help--- diff -puN drivers/net/wireless/hostap/hostap_80211_rx.c~git-net drivers/net/wireless/hostap/hostap_80211_rx.c --- a/drivers/net/wireless/hostap/hostap_80211_rx.c~git-net +++ a/drivers/net/wireless/hostap/hostap_80211_rx.c @@ -167,7 +167,7 @@ hdr->f.status = s; hdr->f.len = l; hdr-> ret = skb->len - phdrlen; skb->dev = dev; - skb->mac.raw = skb->data; + skb_reset_mac_header(skb); skb_pull(skb, hdrlen); if (prism_header) skb_pull(skb, phdrlen); @@ -933,12 +933,14 @@ void hostap_80211_rx(struct net_device * if (frag == 0) { /* copy first fragment (including full headers) into * beginning of the fragment cache skb */ - memcpy(skb_put(frag_skb, flen), skb->data, flen); + skb_copy_from_linear_data(skb, skb_put(frag_skb, flen), + flen); } else { /* append frame payload to the end of the fragment * cache skb */ - memcpy(skb_put(frag_skb, flen), skb->data + hdrlen, - flen); + skb_copy_from_linear_data_offset(skb, hdrlen, + skb_put(frag_skb, + flen), flen); } dev_kfree_skb(skb); skb = NULL; @@ -1044,8 +1046,9 @@ void hostap_80211_rx(struct net_device * skb->len >= ETH_HLEN + ETH_ALEN) { /* Non-standard frame: get addr4 from its bogus location after * the payload */ - memcpy(skb->data + ETH_ALEN, - skb->data + skb->len - ETH_ALEN, ETH_ALEN); + skb_copy_from_linear_data_offset(skb, skb->len - ETH_ALEN, + skb->data + ETH_ALEN, + ETH_ALEN); skb_trim(skb, skb->len - ETH_ALEN); } @@ -1073,17 +1076,17 @@ void hostap_80211_rx(struct net_device * if (skb2 != NULL) { /* send to wireless media */ - skb2->protocol = __constant_htons(ETH_P_802_3); - skb2->mac.raw = skb2->nh.raw = skb2->data; - /* skb2->nh.raw = skb2->data + ETH_HLEN; */ skb2->dev = dev; + skb2->protocol = __constant_htons(ETH_P_802_3); + skb_reset_mac_header(skb2); + skb_reset_network_header(skb2); + /* skb2->network_header += 
ETH_HLEN; */ dev_queue_xmit(skb2); } if (skb) { skb->protocol = eth_type_trans(skb, dev); memset(skb->cb, 0, sizeof(skb->cb)); - skb->dev = dev; netif_rx(skb); } diff -puN drivers/net/wireless/hostap/hostap_80211_tx.c~git-net drivers/net/wireless/hostap/hostap_80211_tx.c --- a/drivers/net/wireless/hostap/hostap_80211_tx.c~git-net +++ a/drivers/net/wireless/hostap/hostap_80211_tx.c @@ -146,7 +146,8 @@ int hostap_data_start_xmit(struct sk_buf fc |= IEEE80211_FCTL_FROMDS | IEEE80211_FCTL_TODS; /* From&To DS: Addr1 = RA, Addr2 = TA, Addr3 = DA, * Addr4 = SA */ - memcpy(&hdr.addr4, skb->data + ETH_ALEN, ETH_ALEN); + skb_copy_from_linear_data_offset(skb, ETH_ALEN, + &hdr.addr4, ETH_ALEN); hdr_len += ETH_ALEN; } else { /* bogus 4-addr format to workaround Prism2 station @@ -159,7 +160,8 @@ int hostap_data_start_xmit(struct sk_buf /* SA from skb->data + ETH_ALEN will be added after * frame payload; use hdr.addr4 as a temporary buffer */ - memcpy(&hdr.addr4, skb->data + ETH_ALEN, ETH_ALEN); + skb_copy_from_linear_data_offset(skb, ETH_ALEN, + &hdr.addr4, ETH_ALEN); need_tailroom += ETH_ALEN; } @@ -174,24 +176,27 @@ int hostap_data_start_xmit(struct sk_buf else memcpy(&hdr.addr1, local->bssid, ETH_ALEN); memcpy(&hdr.addr2, dev->dev_addr, ETH_ALEN); - memcpy(&hdr.addr3, skb->data, ETH_ALEN); + skb_copy_from_linear_data(skb, &hdr.addr3, ETH_ALEN); } else if (local->iw_mode == IW_MODE_MASTER && !to_assoc_ap) { fc |= IEEE80211_FCTL_FROMDS; /* From DS: Addr1 = DA, Addr2 = BSSID, Addr3 = SA */ - memcpy(&hdr.addr1, skb->data, ETH_ALEN); + skb_copy_from_linear_data(skb, &hdr.addr1, ETH_ALEN); memcpy(&hdr.addr2, dev->dev_addr, ETH_ALEN); - memcpy(&hdr.addr3, skb->data + ETH_ALEN, ETH_ALEN); + skb_copy_from_linear_data_offset(skb, ETH_ALEN, &hdr.addr3, + ETH_ALEN); } else if (local->iw_mode == IW_MODE_INFRA || to_assoc_ap) { fc |= IEEE80211_FCTL_TODS; /* To DS: Addr1 = BSSID, Addr2 = SA, Addr3 = DA */ memcpy(&hdr.addr1, to_assoc_ap ? 
local->assoc_ap_addr : local->bssid, ETH_ALEN); - memcpy(&hdr.addr2, skb->data + ETH_ALEN, ETH_ALEN); - memcpy(&hdr.addr3, skb->data, ETH_ALEN); + skb_copy_from_linear_data_offset(skb, ETH_ALEN, &hdr.addr2, + ETH_ALEN); + skb_copy_from_linear_data(skb, &hdr.addr3, ETH_ALEN); } else if (local->iw_mode == IW_MODE_ADHOC) { /* not From/To DS: Addr1 = DA, Addr2 = SA, Addr3 = BSSID */ - memcpy(&hdr.addr1, skb->data, ETH_ALEN); - memcpy(&hdr.addr2, skb->data + ETH_ALEN, ETH_ALEN); + skb_copy_from_linear_data(skb, &hdr.addr1, ETH_ALEN); + skb_copy_from_linear_data_offset(skb, ETH_ALEN, &hdr.addr2, + ETH_ALEN); memcpy(&hdr.addr3, local->bssid, ETH_ALEN); } @@ -237,7 +242,7 @@ int hostap_data_start_xmit(struct sk_buf iface->stats.tx_packets++; iface->stats.tx_bytes += skb->len; - skb->mac.raw = skb->data; + skb_reset_mac_header(skb); meta = (struct hostap_skb_tx_data *) skb->cb; memset(meta, 0, sizeof(*meta)); meta->magic = HOSTAP_SKB_TX_DATA_MAGIC; diff -puN drivers/net/wireless/hostap/hostap_ap.c~git-net drivers/net/wireless/hostap/hostap_ap.c --- a/drivers/net/wireless/hostap/hostap_ap.c~git-net +++ a/drivers/net/wireless/hostap/hostap_ap.c @@ -982,7 +982,8 @@ static void prism2_send_mgmt(struct net_ meta->tx_cb_idx = tx_cb_idx; skb->dev = dev; - skb->mac.raw = skb->nh.raw = skb->data; + skb_reset_mac_header(skb); + skb_reset_network_header(skb); dev_queue_xmit(skb); } #endif /* PRISM2_NO_KERNEL_IEEE80211_MGMT */ @@ -1276,8 +1277,8 @@ static char * ap_auth_make_challenge(str return NULL; } - memcpy(tmpbuf, skb->data + ap->crypt->extra_mpdu_prefix_len, - WLAN_AUTH_CHALLENGE_LEN); + skb_copy_from_linear_data_offset(skb, ap->crypt->extra_mpdu_prefix_len, + tmpbuf, WLAN_AUTH_CHALLENGE_LEN); dev_kfree_skb(skb); return tmpbuf; diff -puN drivers/net/wireless/hostap/hostap_hw.c~git-net drivers/net/wireless/hostap/hostap_hw.c --- a/drivers/net/wireless/hostap/hostap_hw.c~git-net +++ a/drivers/net/wireless/hostap/hostap_hw.c @@ -1838,13 +1838,14 @@ static int prism2_tx_80211(struct sk_buf /* skb->data starts with txdesc->frame_control */ hdr_len = 24; - memcpy(&txdesc.frame_control, skb->data, hdr_len); + skb_copy_from_linear_data(skb, &txdesc.frame_control, hdr_len); fc = le16_to_cpu(txdesc.frame_control); if (WLAN_FC_GET_TYPE(fc) == IEEE80211_FTYPE_DATA && (fc & IEEE80211_FCTL_FROMDS) && (fc & IEEE80211_FCTL_TODS) && skb->len >= 30) { /* Addr4 */ - memcpy(txdesc.addr4, skb->data + hdr_len, ETH_ALEN); + skb_copy_from_linear_data_offset(skb, hdr_len, txdesc.addr4, + ETH_ALEN); hdr_len += ETH_ALEN; } @@ -2217,7 +2218,7 @@ static void hostap_tx_callback(local_inf memcpy(skb_put(skb, len), payload, len); skb->dev = local->dev; - skb->mac.raw = skb->data; + skb_reset_mac_header(skb); cb->func(skb, ok, cb->data); } diff -puN drivers/net/wireless/hostap/hostap_main.c~git-net drivers/net/wireless/hostap/hostap_main.c --- a/drivers/net/wireless/hostap/hostap_main.c~git-net +++ a/drivers/net/wireless/hostap/hostap_main.c @@ -590,20 +590,20 @@ void hostap_dump_tx_header(const char *n int hostap_80211_header_parse(struct sk_buff *skb, unsigned char *haddr) { - memcpy(haddr, skb->mac.raw + 10, ETH_ALEN); /* addr2 */ + memcpy(haddr, skb_mac_header(skb) + 10, ETH_ALEN); /* addr2 */ return ETH_ALEN; } int hostap_80211_prism_header_parse(struct sk_buff *skb, unsigned char *haddr) { - if (*(u32 *)skb->mac.raw == LWNG_CAP_DID_BASE) { - memcpy(haddr, skb->mac.raw + - sizeof(struct linux_wlan_ng_prism_hdr) + 10, + const unsigned char *mac = skb_mac_header(skb); + + if (*(u32 *)mac == LWNG_CAP_DID_BASE) { + memcpy(haddr, mac + 
sizeof(struct linux_wlan_ng_prism_hdr) + 10, ETH_ALEN); /* addr2 */ - } else { /* (*(u32 *)skb->mac.raw == htonl(LWNG_CAPHDR_VERSION)) */ - memcpy(haddr, skb->mac.raw + - sizeof(struct linux_wlan_ng_cap_hdr) + 10, + } else { /* (*(u32 *)mac == htonl(LWNG_CAPHDR_VERSION)) */ + memcpy(haddr, mac + sizeof(struct linux_wlan_ng_cap_hdr) + 10, ETH_ALEN); /* addr2 */ } return ETH_ALEN; @@ -1063,7 +1063,8 @@ int prism2_sta_send_mgmt(local_info_t *l meta->iface = netdev_priv(dev); skb->dev = dev; - skb->mac.raw = skb->nh.raw = skb->data; + skb_reset_mac_header(skb); + skb_reset_network_header(skb); dev_queue_xmit(skb); return 0; diff -puN drivers/net/wireless/ipw2100.c~git-net drivers/net/wireless/ipw2100.c --- a/drivers/net/wireless/ipw2100.c~git-net +++ a/drivers/net/wireless/ipw2100.c @@ -2416,8 +2416,9 @@ static void isr_rx(struct ipw2100_priv * #ifdef IPW2100_RX_DEBUG /* Make a copy of the frame so we can dump it to the logs if * ieee80211_rx fails */ - memcpy(packet_data, packet->skb->data, - min_t(u32, status->frame_size, IPW_RX_NIC_BUFFER_LENGTH)); + skb_copy_from_linear_data(packet->skb, packet_data, + min_t(u32, status->frame_size, + IPW_RX_NIC_BUFFER_LENGTH)); #endif if (!ieee80211_rx(priv->ieee, packet->skb, stats)) { diff -puN drivers/net/wireless/ipw2200.c~git-net drivers/net/wireless/ipw2200.c --- a/drivers/net/wireless/ipw2200.c~git-net +++ a/drivers/net/wireless/ipw2200.c @@ -8179,7 +8179,7 @@ static void ipw_handle_mgmt_packet(struc skb->dev = priv->ieee->dev; /* Point raw at the ieee80211_stats */ - skb->mac.raw = skb->data; + skb_reset_mac_header(skb); skb->pkt_type = PACKET_OTHERHOST; skb->protocol = __constant_htons(ETH_P_80211_STATS); @@ -10401,7 +10401,7 @@ static void ipw_handle_promiscuous_tx(st rt_hdr->it_len = dst->len; - memcpy(skb_put(dst, len), src->data, len); + skb_copy_from_linear_data(src, skb_put(dst, len), len); if (!ieee80211_rx(priv->prom_priv->ieee, dst, &dummystats)) dev_kfree_skb_any(dst); diff -puN drivers/net/wireless/netwave_cs.c~git-net drivers/net/wireless/netwave_cs.c --- a/drivers/net/wireless/netwave_cs.c~git-net +++ a/drivers/net/wireless/netwave_cs.c @@ -1283,7 +1283,6 @@ static int netwave_rx(struct net_device skb_reserve( skb, 2); /* Align IP on 16 byte */ skb_put( skb, rcvLen); - skb->dev = dev; /* Copy packet fragments to the skb data area */ ptr = (u_char*) skb->data; diff -puN drivers/net/wireless/orinoco.c~git-net drivers/net/wireless/orinoco.c --- a/drivers/net/wireless/orinoco.c~git-net +++ a/drivers/net/wireless/orinoco.c @@ -689,7 +689,7 @@ static void orinoco_stat_gather(struct n /* Note : gcc will optimise the whole section away if * WIRELESS_SPY is not defined... 
- Jean II */ if (SPY_NUMBER(priv)) { - orinoco_spy_gather(dev, skb->mac.raw + ETH_ALEN, + orinoco_spy_gather(dev, skb_mac_header(skb) + ETH_ALEN, desc->signal, desc->silence); } } @@ -770,7 +770,7 @@ static void orinoco_rx_monitor(struct ne /* Copy the 802.11 header to the skb */ memcpy(skb_put(skb, hdrlen), &(desc->frame_ctl), hdrlen); - skb->mac.raw = skb->data; + skb_reset_mac_header(skb); /* If any, copy the data from the card to the skb */ if (datalen > 0) { @@ -915,7 +915,6 @@ static void __orinoco_ev_rx(struct net_d memcpy(hdr->h_source, desc.addr2, ETH_ALEN); dev->last_rx = jiffies; - skb->dev = dev; skb->protocol = eth_type_trans(skb, dev); skb->ip_summed = CHECKSUM_NONE; if (fc & IEEE80211_FCTL_TODS) diff -puN drivers/net/wireless/prism54/islpci_eth.c~git-net drivers/net/wireless/prism54/islpci_eth.c --- a/drivers/net/wireless/prism54/islpci_eth.c~git-net +++ a/drivers/net/wireless/prism54/islpci_eth.c @@ -136,7 +136,7 @@ islpci_eth_transmit(struct sk_buff *skb, printk("islpci_eth_transmit:wds_mac\n"); #endif memmove(skb->data + 6, src, skb->len); - memcpy(skb->data, wds_mac, 6); + skb_copy_to_linear_data(skb, wds_mac, 6); } else { memmove(skb->data, src, skb->len); } @@ -162,13 +162,16 @@ islpci_eth_transmit(struct sk_buff *skb, skb_put(newskb, init_wds ? skb->len + 6 : skb->len); if (init_wds) { - memcpy(newskb->data + 6, skb->data, skb->len); - memcpy(newskb->data, wds_mac, 6); + skb_copy_from_linear_data(skb, + newskb->data + 6, + skb->len); + skb_copy_to_linear_data(newskb, wds_mac, 6); #ifdef ISLPCI_ETH_DEBUG printk("islpci_eth_transmit:wds_mac\n"); #endif } else - memcpy(newskb->data, skb->data, skb->len); + skb_copy_from_linear_data(skb, newskb->data, + skb->len); #if VERBOSE > SHOW_ERROR_MESSAGES DEBUG(SHOW_TRACING, "memcpy %p %p %i wds %i\n", @@ -303,7 +306,7 @@ islpci_monitor_rx(islpci_private *priv, skb_pull(*skb, sizeof (struct rfmon_header)); (*skb)->protocol = htons(ETH_P_802_2); - (*skb)->mac.raw = (*skb)->data; + skb_reset_mac_header(*skb); (*skb)->pkt_type = PACKET_OTHERHOST; return 0; @@ -374,10 +377,6 @@ islpci_eth_receive(islpci_private *priv) DEBUG(SHOW_BUFFER_CONTENTS, "\nrx %p ", skb->data); display_buffer((char *) skb->data, skb->len); #endif - - /* do some additional sk_buff and network layer parameters */ - skb->dev = ndev; - /* take care of monitor mode and spy monitoring. 
*/ if (unlikely(priv->iw_mode == IW_MODE_MONITOR)) discard = islpci_monitor_rx(priv, &skb); @@ -398,8 +397,10 @@ islpci_eth_receive(islpci_private *priv) /* Update spy records */ wireless_spy_update(ndev, annex->addr2, &wstats); - memcpy(skb->data + sizeof (struct rfmon_header), - skb->data, 2 * ETH_ALEN); + skb_copy_from_linear_data(skb, + (skb->data + + sizeof(struct rfmon_header)), + 2 * ETH_ALEN); skb_pull(skb, sizeof (struct rfmon_header)); } skb->protocol = eth_type_trans(skb, ndev); diff -puN drivers/net/wireless/ray_cs.c~git-net drivers/net/wireless/ray_cs.c --- a/drivers/net/wireless/ray_cs.c~git-net +++ a/drivers/net/wireless/ray_cs.c @@ -2232,7 +2232,6 @@ static void rx_data(struct net_device *d return; } skb_reserve( skb, 2); /* Align IP on 16 byte (TBD check this)*/ - skb->dev = dev; DEBUG(4,"ray_cs rx_data total_len = %x, rx_len = %x\n",total_len,rx_len); @@ -2243,7 +2242,8 @@ static void rx_data(struct net_device *d rx_ptr += copy_from_rx_buff(local, rx_ptr, pkt_addr & RX_BUFF_END, rx_len); /* Get source address */ #ifdef WIRELESS_SPY - memcpy(linksrcaddr, ((struct mac_header *)skb->data)->addr_2, ETH_ALEN); + skb_copy_from_linear_data_offset(skb, offsetof(struct mac_header, addr_2), + linksrcaddr, ETH_ALEN); #endif /* Now, deal with encapsulation/translation/sniffer */ if (!sniffer) { diff -puN drivers/net/wireless/strip.c~git-net drivers/net/wireless/strip.c --- a/drivers/net/wireless/strip.c~git-net +++ a/drivers/net/wireless/strip.c @@ -2009,7 +2009,7 @@ static void deliver_packet(struct strip packetlen); skb->dev = get_strip_dev(strip_info); skb->protocol = header->protocol; - skb->mac.raw = skb->data; + skb_reset_mac_header(skb); /* Having put a fake header on the front of the sk_buff for the */ /* benefit of tools like tcpdump, skb_pull now 'consumes' that */ diff -puN drivers/net/wireless/wavelan.c~git-net drivers/net/wireless/wavelan.c --- a/drivers/net/wireless/wavelan.c~git-net +++ a/drivers/net/wireless/wavelan.c @@ -2512,14 +2512,13 @@ wv_packet_read(struct net_device * dev, return; } - skb->dev = dev; - /* Copy the packet to the buffer. */ obram_read(ioaddr, buf_off, skb_put(skb, sksize), sksize); skb->protocol = eth_type_trans(skb, dev); #ifdef DEBUG_RX_INFO - wv_packet_info(skb->mac.raw, sksize, dev->name, "wv_packet_read"); + wv_packet_info(skb_mac_header(skb), sksize, dev->name, + "wv_packet_read"); #endif /* DEBUG_RX_INFO */ /* Statistics-gathering and associated stuff. @@ -2555,7 +2554,7 @@ wv_packet_read(struct net_device * dev, /* Spying stuff */ #ifdef IW_WIRELESS_SPY - wl_spy_gather(dev, skb->mac.raw + WAVELAN_ADDR_SIZE, + wl_spy_gather(dev, skb_mac_header(skb) + WAVELAN_ADDR_SIZE, stats); #endif /* IW_WIRELESS_SPY */ #ifdef HISTOGRAM @@ -2939,7 +2938,7 @@ static int wavelan_packet_xmit(struct sk * need to pad. 
Jean II */ if (skb->len < ETH_ZLEN) { memset(data, 0, ETH_ZLEN); - memcpy(data, skb->data, skb->len); + skb_copy_from_linear_data(skb, data, skb->len); /* Write packet on the card */ if(wv_packet_write(dev, data, ETH_ZLEN)) return 1; /* We failed */ diff -puN drivers/net/wireless/wavelan_cs.c~git-net drivers/net/wireless/wavelan_cs.c --- a/drivers/net/wireless/wavelan_cs.c~git-net +++ a/drivers/net/wireless/wavelan_cs.c @@ -2884,14 +2884,12 @@ wv_packet_read(struct net_device * dev, return; } - skb->dev = dev; - skb_reserve(skb, 2); fd_p = read_ringbuf(dev, fd_p, (char *) skb_put(skb, sksize), sksize); skb->protocol = eth_type_trans(skb, dev); #ifdef DEBUG_RX_INFO - wv_packet_info(skb->mac.raw, sksize, dev->name, "wv_packet_read"); + wv_packet_info(skb_mac_header(skb), sksize, dev->name, "wv_packet_read"); #endif /* DEBUG_RX_INFO */ /* Statistics gathering & stuff associated. @@ -2925,7 +2923,7 @@ wv_packet_read(struct net_device * dev, #endif /* WAVELAN_ROAMING */ #ifdef WIRELESS_SPY - wl_spy_gather(dev, skb->mac.raw + WAVELAN_ADDR_SIZE, stats); + wl_spy_gather(dev, skb_mac_header(skb) + WAVELAN_ADDR_SIZE, stats); #endif /* WIRELESS_SPY */ #ifdef HISTOGRAM wl_his_gather(dev, stats); diff -puN drivers/net/wireless/zd1201.c~git-net drivers/net/wireless/zd1201.c --- a/drivers/net/wireless/zd1201.c~git-net +++ a/drivers/net/wireless/zd1201.c @@ -327,7 +327,6 @@ static void zd1201_usbrx(struct urb *urb memcpy(skb_put(skb, 6), &data[datalen-8], 6); memcpy(skb_put(skb, 2), &data[datalen-24], 2); memcpy(skb_put(skb, len), data, len); - skb->dev = zd->dev; skb->dev->last_rx = jiffies; skb->protocol = eth_type_trans(skb, zd->dev); zd->stats.rx_packets++; @@ -385,7 +384,6 @@ static void zd1201_usbrx(struct urb *urb memcpy(skb_put(skb, 2), &data[6], 2); memcpy(skb_put(skb, len), data+8, len); } - skb->dev = zd->dev; skb->dev->last_rx = jiffies; skb->protocol = eth_type_trans(skb, zd->dev); zd->stats.rx_packets++; @@ -809,10 +807,10 @@ static int zd1201_hard_start_xmit(struct txbuf[4] = 0x00; txbuf[5] = 0x00; - memcpy(txbuf+6, skb->data+12, skb->len-12); + skb_copy_from_linear_data_offset(skb, 12, txbuf + 6, skb->len - 12); if (pad) txbuf[skb->len-12+6]=0; - memcpy(txbuf+skb->len-12+6+pad, skb->data, 12); + skb_copy_from_linear_data(skb, txbuf + skb->len - 12 + 6 + pad, 12); *(__be16*)&txbuf[skb->len+6+pad] = htons(skb->len-12+6); txbuf[txbuflen-1] = 0; diff -puN drivers/net/wireless/zd1211rw/Kconfig~git-net drivers/net/wireless/zd1211rw/Kconfig --- a/drivers/net/wireless/zd1211rw/Kconfig~git-net +++ a/drivers/net/wireless/zd1211rw/Kconfig @@ -1,6 +1,7 @@ config ZD1211RW tristate "ZyDAS ZD1211/ZD1211B USB-wireless support" - depends on USB && IEEE80211 && IEEE80211_SOFTMAC && NET_RADIO && EXPERIMENTAL + depends on USB && IEEE80211_SOFTMAC && WLAN_80211 && EXPERIMENTAL + select WIRELESS_EXT select FW_LOADER ---help--- This is an experimental driver for the ZyDAS ZD1211/ZD1211B wireless diff -puN drivers/net/yellowfin.c~git-net drivers/net/yellowfin.c --- a/drivers/net/yellowfin.c~git-net +++ a/drivers/net/yellowfin.c @@ -1137,7 +1137,6 @@ static int yellowfin_rx(struct net_devic skb = dev_alloc_skb(pkt_len + 2); if (skb == NULL) break; - skb->dev = dev; skb_reserve(skb, 2); /* 16 byte align the IP header */ eth_copy_and_sum(skb, rx_skb->data, pkt_len, 0); skb_put(skb, pkt_len); diff -puN drivers/net/znet.c~git-net drivers/net/znet.c --- a/drivers/net/znet.c~git-net +++ a/drivers/net/znet.c @@ -774,7 +774,6 @@ static void znet_rx(struct net_device *d znet->stats.rx_dropped++; break; } - skb->dev = dev; 
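/*
 * Aside: the receive-path hunks above all apply the same 2.6.22 idiom.
 * eth_type_trans() already sets skb->dev, so the explicit assignment is
 * dropped, and direct pokes at skb->data give way to the new skb helpers.
 * A minimal sketch of the resulting pattern for a hypothetical copybreak
 * receive path (the function and names below are illustrative only, not
 * taken from any driver in this patch):
 */
#include <linux/skbuff.h>
#include <linux/netdevice.h>
#include <linux/etherdevice.h>
#include <linux/jiffies.h>

static void example_rx_copybreak(struct net_device *dev,
				 struct sk_buff *rx_skb, int pkt_len)
{
	struct sk_buff *skb = dev_alloc_skb(pkt_len + 2);

	if (!skb)
		return;			/* drop; caller updates rx_dropped */

	skb_reserve(skb, 2);		/* 16-byte align the IP header */
	/* replaces memcpy(skb_put(skb, pkt_len), rx_skb->data, pkt_len) */
	skb_copy_from_linear_data(rx_skb, skb_put(skb, pkt_len), pkt_len);

	/* no skb->dev = dev here: eth_type_trans() fills it in */
	skb->protocol = eth_type_trans(skb, dev);
	netif_rx(skb);
	dev->last_rx = jiffies;
}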
if (&znet->rx_cur[(pkt_len+1)>>1] > znet->rx_end) { int semi_cnt = (znet->rx_end - znet->rx_cur)<<1; diff -puN drivers/parisc/led.c~git-net drivers/parisc/led.c --- a/drivers/parisc/led.c~git-net +++ a/drivers/parisc/led.c @@ -372,9 +372,9 @@ static __inline__ int led_get_net_activi continue; if (LOOPBACK(in_dev->ifa_list->ifa_local)) continue; - if (!dev->get_stats) - continue; stats = dev->get_stats(dev); + if (!stats) + continue; rx_total += stats->rx_packets; tx_total += stats->tx_packets; } diff -puN drivers/s390/net/claw.c~git-net drivers/s390/net/claw.c --- a/drivers/s390/net/claw.c~git-net +++ a/drivers/s390/net/claw.c @@ -3525,8 +3525,8 @@ unpack_next: memcpy(skb_put(skb,len_of_data), privptr->p_mtc_envelope, len_of_data); - skb->mac.raw=skb->data; skb->dev=dev; + skb_reset_mac_header(skb); skb->protocol=htons(ETH_P_IP); skb->ip_summed=CHECKSUM_UNNECESSARY; privptr->stats.rx_packets++; diff -puN drivers/s390/net/ctcmain.c~git-net drivers/s390/net/ctcmain.c --- a/drivers/s390/net/ctcmain.c~git-net +++ a/drivers/s390/net/ctcmain.c @@ -455,7 +455,7 @@ ctc_unpack_skb(struct channel *ch, struc return; } skb_put(pskb, header->length); - pskb->mac.raw = pskb->data; + skb_reset_mac_header(pskb); len -= header->length; skb = dev_alloc_skb(pskb->len); if (!skb) { @@ -472,8 +472,9 @@ ctc_unpack_skb(struct channel *ch, struc privptr->stats.rx_dropped++; return; } - memcpy(skb_put(skb, pskb->len), pskb->data, pskb->len); - skb->mac.raw = skb->data; + skb_copy_from_linear_data(pskb, skb_put(skb, pskb->len), + pskb->len); + skb_reset_mac_header(skb); skb->dev = pskb->dev; skb->protocol = pskb->protocol; pskb->ip_summed = CHECKSUM_UNNECESSARY; @@ -706,7 +707,8 @@ ch_action_txdone(fsm_instance * fi, int spin_unlock(&ch->collect_lock); return; } - ch->trans_skb->tail = ch->trans_skb->data = ch->trans_skb_data; + ch->trans_skb->data = ch->trans_skb_data; + skb_reset_tail_pointer(ch->trans_skb); ch->trans_skb->len = 0; if (ch->prof.maxmulti < (ch->collect_len + 2)) ch->prof.maxmulti = ch->collect_len + 2; @@ -715,8 +717,9 @@ ch_action_txdone(fsm_instance * fi, int *((__u16 *) skb_put(ch->trans_skb, 2)) = ch->collect_len + 2; i = 0; while ((skb = skb_dequeue(&ch->collect_queue))) { - memcpy(skb_put(ch->trans_skb, skb->len), skb->data, - skb->len); + skb_copy_from_linear_data(skb, skb_put(ch->trans_skb, + skb->len), + skb->len); privptr->stats.tx_packets++; privptr->stats.tx_bytes += skb->len - LL_HEADER_LENGTH; atomic_dec(&skb->users); @@ -831,7 +834,8 @@ ch_action_rx(fsm_instance * fi, int even ctc_unpack_skb(ch, skb); } again: - skb->data = skb->tail = ch->trans_skb_data; + skb->data = ch->trans_skb_data; + skb_reset_tail_pointer(skb); skb->len = 0; if (ctc_checkalloc_buffer(ch, 1)) return; @@ -2226,7 +2230,8 @@ transmit_skb(struct channel *ch, struct * IDAL support in CTC is broken, so we have to * care about skb's above 2G ourselves. 
*/ - hi = ((unsigned long) skb->tail + LL_HEADER_LENGTH) >> 31; + hi = ((unsigned long)skb_tail_pointer(skb) + + LL_HEADER_LENGTH) >> 31; if (hi) { nskb = alloc_skb(skb->len, GFP_ATOMIC | GFP_DMA); if (!nskb) { @@ -2262,11 +2267,12 @@ transmit_skb(struct channel *ch, struct return -EBUSY; } - ch->trans_skb->tail = ch->trans_skb->data; + skb_reset_tail_pointer(ch->trans_skb); ch->trans_skb->len = 0; ch->ccw[1].count = skb->len; - memcpy(skb_put(ch->trans_skb, skb->len), skb->data, - skb->len); + skb_copy_from_linear_data(skb, skb_put(ch->trans_skb, + skb->len), + skb->len); atomic_dec(&skb->users); dev_kfree_skb_irq(skb); ccw_idx = 0; diff -puN drivers/s390/net/lcs.c~git-net drivers/s390/net/lcs.c --- a/drivers/s390/net/lcs.c~git-net +++ a/drivers/s390/net/lcs.c @@ -1576,7 +1576,7 @@ __lcs_start_xmit(struct lcs_card *card, header->offset = card->tx_buffer->count; header->type = card->lan_type; header->slot = card->portno; - memcpy(header + 1, skb->data, skb->len); + skb_copy_from_linear_data(skb, header + 1, skb->len); spin_unlock(&card->lock); card->stats.tx_bytes += skb->len; card->stats.tx_packets++; @@ -1784,7 +1784,6 @@ lcs_get_skb(struct lcs_card *card, char card->stats.rx_dropped++; return; } - skb->dev = card->dev; memcpy(skb_put(skb, skb_len), skb_data, skb_len); skb->protocol = card->lan_type_trans(skb, card->dev); card->stats.rx_bytes += skb_len; diff -puN drivers/s390/net/netiucv.c~git-net drivers/s390/net/netiucv.c --- a/drivers/s390/net/netiucv.c~git-net +++ a/drivers/s390/net/netiucv.c @@ -635,7 +635,7 @@ static void netiucv_unpack_skb(struct iu return; } skb_put(pskb, header->next); - pskb->mac.raw = pskb->data; + skb_reset_mac_header(pskb); skb = dev_alloc_skb(pskb->len); if (!skb) { PRINT_WARN("%s Out of memory in netiucv_unpack_skb\n", @@ -645,8 +645,9 @@ static void netiucv_unpack_skb(struct iu privptr->stats.rx_dropped++; return; } - memcpy(skb_put(skb, pskb->len), pskb->data, pskb->len); - skb->mac.raw = skb->data; + skb_copy_from_linear_data(pskb, skb_put(skb, pskb->len), + pskb->len); + skb_reset_mac_header(skb); skb->dev = pskb->dev; skb->protocol = pskb->protocol; pskb->ip_summed = CHECKSUM_UNNECESSARY; @@ -689,7 +690,8 @@ static void conn_action_rx(fsm_instance msg->length, conn->max_buffsize); return; } - conn->rx_buff->data = conn->rx_buff->tail = conn->rx_buff->head; + conn->rx_buff->data = conn->rx_buff->head; + skb_reset_tail_pointer(conn->rx_buff); conn->rx_buff->len = 0; rc = iucv_message_receive(conn->path, msg, 0, conn->rx_buff->data, msg->length, NULL); @@ -735,14 +737,17 @@ static void conn_action_txdone(fsm_insta } } } - conn->tx_buff->data = conn->tx_buff->tail = conn->tx_buff->head; + conn->tx_buff->data = conn->tx_buff->head; + skb_reset_tail_pointer(conn->tx_buff); conn->tx_buff->len = 0; spin_lock_irqsave(&conn->collect_lock, saveflags); while ((skb = skb_dequeue(&conn->collect_queue))) { header.next = conn->tx_buff->len + skb->len + NETIUCV_HDRLEN; memcpy(skb_put(conn->tx_buff, NETIUCV_HDRLEN), &header, NETIUCV_HDRLEN); - memcpy(skb_put(conn->tx_buff, skb->len), skb->data, skb->len); + skb_copy_from_linear_data(skb, + skb_put(conn->tx_buff, skb->len), + skb->len); txbytes += skb->len; txpackets++; stat_maxcq++; @@ -1164,8 +1169,8 @@ static int netiucv_transmit_skb(struct i * Copy the skb to a new allocated skb in lowmem only if the * data is located above 2G in memory or tailroom is < 2. 
*/ - unsigned long hi = - ((unsigned long)(skb->tail + NETIUCV_HDRLEN)) >> 31; + unsigned long hi = ((unsigned long)(skb_tail_pointer(skb) + + NETIUCV_HDRLEN)) >> 31; int copied = 0; if (hi || (skb_tailroom(skb) < 2)) { nskb = alloc_skb(skb->len + NETIUCV_HDRLEN + diff -puN drivers/s390/net/qeth_eddp.c~git-net drivers/s390/net/qeth_eddp.c --- a/drivers/s390/net/qeth_eddp.c~git-net +++ a/drivers/s390/net/qeth_eddp.c @@ -267,7 +267,8 @@ qeth_eddp_copy_data_tcp(char *dst, struc QETH_DBF_TEXT(trace, 5, "eddpcdtc"); if (skb_shinfo(eddp->skb)->nr_frags == 0) { - memcpy(dst, eddp->skb->data + eddp->skb_offset, len); + skb_copy_from_linear_data_offset(eddp->skb, eddp->skb_offset, + dst, len); *hcsum = csum_partial(eddp->skb->data + eddp->skb_offset, len, *hcsum); eddp->skb_offset += len; @@ -416,7 +417,7 @@ __qeth_eddp_fill_context_tcp(struct qeth eddp->skb_offset += VLAN_HLEN; #endif /* CONFIG_QETH_VLAN */ } - tcph = eddp->skb->h.th; + tcph = tcp_hdr(eddp->skb); while (eddp->skb_offset < eddp->skb->len) { data_len = min((int)skb_shinfo(eddp->skb)->gso_size, (int)(eddp->skb->len - eddp->skb_offset)); @@ -473,20 +474,24 @@ qeth_eddp_fill_context_tcp(struct qeth_e QETH_DBF_TEXT(trace, 5, "eddpficx"); /* create our segmentation headers and copy original headers */ if (skb->protocol == htons(ETH_P_IP)) - eddp = qeth_eddp_create_eddp_data(qhdr, (u8 *)skb->nh.iph, - skb->nh.iph->ihl*4, - (u8 *)skb->h.th, skb->h.th->doff*4); + eddp = qeth_eddp_create_eddp_data(qhdr, + skb_network_header(skb), + ip_hdrlen(skb), + skb_transport_header(skb), + tcp_hdrlen(skb)); else - eddp = qeth_eddp_create_eddp_data(qhdr, (u8 *)skb->nh.ipv6h, - sizeof(struct ipv6hdr), - (u8 *)skb->h.th, skb->h.th->doff*4); + eddp = qeth_eddp_create_eddp_data(qhdr, + skb_network_header(skb), + sizeof(struct ipv6hdr), + skb_transport_header(skb), + tcp_hdrlen(skb)); if (eddp == NULL) { QETH_DBF_TEXT(trace, 2, "eddpfcnm"); return -ENOMEM; } if (qhdr->hdr.l2.id == QETH_HEADER_TYPE_LAYER2) { - skb->mac.raw = (skb->data) + sizeof(struct qeth_hdr); + skb_set_mac_header(skb, sizeof(struct qeth_hdr)); memcpy(&eddp->mac, eth_hdr(skb), ETH_HLEN); #ifdef CONFIG_QETH_VLAN if (eddp->mac.h_proto == __constant_htons(ETH_P_8021Q)) { @@ -590,12 +595,13 @@ qeth_eddp_create_context_tcp(struct qeth QETH_DBF_TEXT(trace, 5, "creddpct"); if (skb->protocol == htons(ETH_P_IP)) ctx = qeth_eddp_create_context_generic(card, skb, - sizeof(struct qeth_hdr) + skb->nh.iph->ihl*4 + - skb->h.th->doff*4); + (sizeof(struct qeth_hdr) + + ip_hdrlen(skb) + + tcp_hdrlen(skb))); else if (skb->protocol == htons(ETH_P_IPV6)) ctx = qeth_eddp_create_context_generic(card, skb, sizeof(struct qeth_hdr) + sizeof(struct ipv6hdr) + - skb->h.th->doff*4); + tcp_hdrlen(skb)); else QETH_DBF_TEXT(trace, 2, "cetcpinv"); diff -puN drivers/s390/net/qeth_main.c~git-net drivers/s390/net/qeth_main.c --- a/drivers/s390/net/qeth_main.c~git-net +++ a/drivers/s390/net/qeth_main.c @@ -2278,7 +2278,7 @@ qeth_type_trans(struct sk_buff *skb, str (card->info.link_type == QETH_LINK_TYPE_LANE_TR)) return tr_type_trans(skb,dev); #endif /* CONFIG_TR */ - skb->mac.raw = skb->data; + skb_reset_mac_header(skb); skb_pull(skb, ETH_HLEN ); eth = eth_hdr(skb); @@ -2306,9 +2306,9 @@ qeth_rebuild_skb_fake_ll_tr(struct qeth_ struct iphdr *ip_hdr; QETH_DBF_TEXT(trace,5,"skbfktr"); - skb->mac.raw = skb->data - QETH_FAKE_LL_LEN_TR; + skb_set_mac_header(skb, -QETH_FAKE_LL_LEN_TR); /* this is a fake ethernet header */ - fake_hdr = (struct trh_hdr *) skb->mac.raw; + fake_hdr = tr_hdr(skb); /* the destination MAC address */ 
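/*
 * Aside: the qeth hunks around here trade raw skb->mac.raw / skb->nh.iph
 * pointer arithmetic for the 2.6.22 header accessors.  A minimal sketch of
 * the accessor idiom on a hypothetical received Ethernet/IPv4 frame (the
 * function below is illustrative, not part of this patch):
 */
#include <linux/skbuff.h>
#include <linux/etherdevice.h>
#include <linux/ip.h>

static unsigned int example_classify(struct sk_buff *skb)
{
	const struct ethhdr *eth;
	const struct iphdr *iph;

	skb_reset_mac_header(skb);	/* mac header starts at skb->data     */
	eth = eth_hdr(skb);		/* was: (struct ethhdr *)skb->mac.raw */

	skb_set_network_header(skb, ETH_HLEN);
	iph = ip_hdr(skb);		/* was: skb->nh.iph                   */

	/* e.g. priority queueing on the TOS precedence bits, as qeth does */
	return is_multicast_ether_addr(eth->h_dest) ? 0 : (iph->tos >> 6);
}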
switch (skb->pkt_type){ @@ -2359,9 +2359,9 @@ qeth_rebuild_skb_fake_ll_eth(struct qeth struct iphdr *ip_hdr; QETH_DBF_TEXT(trace,5,"skbfketh"); - skb->mac.raw = skb->data - QETH_FAKE_LL_LEN_ETH; + skb_set_mac_header(skb, -QETH_FAKE_LL_LEN_ETH); /* this is a fake ethernet header */ - fake_hdr = (struct ethhdr *) skb->mac.raw; + fake_hdr = eth_hdr(skb); /* the destination MAC address */ switch (skb->pkt_type){ @@ -2461,7 +2461,7 @@ qeth_rebuild_skb(struct qeth_card *card, if (card->options.fake_ll) qeth_rebuild_skb_fake_ll(card, skb, hdr); else - skb->mac.raw = skb->data; + skb_reset_mac_header(skb); skb->ip_summed = card->options.checksum_type; if (card->options.checksum_type == HW_CHECKSUMMING){ if ( (hdr->hdr.l3.ext_flags & @@ -2501,7 +2501,8 @@ qeth_process_inbound_buffer(struct qeth_ vlan_tag = qeth_rebuild_skb(card, skb, hdr); else { /*in case of OSN*/ skb_push(skb, sizeof(struct qeth_hdr)); - memcpy(skb->data, hdr, sizeof(struct qeth_hdr)); + skb_copy_to_linear_data(skb, hdr, + sizeof(struct qeth_hdr)); } /* is device UP ? */ if (!(card->dev->flags & IFF_UP)){ @@ -3778,9 +3779,11 @@ qeth_get_cast_type(struct qeth_card *car } /* try something else */ if (skb->protocol == ETH_P_IPV6) - return (skb->nh.raw[24] == 0xff) ? RTN_MULTICAST : 0; + return (skb_network_header(skb)[24] == 0xff) ? + RTN_MULTICAST : 0; else if (skb->protocol == ETH_P_IP) - return ((skb->nh.raw[16] & 0xf0) == 0xe0) ? RTN_MULTICAST : 0; + return ((skb_network_header(skb)[16] & 0xf0) == 0xe0) ? + RTN_MULTICAST : 0; /* ... */ if (!memcmp(skb->data, skb->dev->broadcast, 6)) return RTN_BROADCAST; @@ -3818,18 +3821,20 @@ qeth_get_priority_queue(struct qeth_card return card->info.is_multicast_different & (card->qdio.no_out_queues - 1); if (card->qdio.do_prio_queueing && (ipv == 4)) { + const u8 tos = ip_hdr(skb)->tos; + if (card->qdio.do_prio_queueing==QETH_PRIO_Q_ING_TOS){ - if (skb->nh.iph->tos & IP_TOS_NOTIMPORTANT) + if (tos & IP_TOS_NOTIMPORTANT) return 3; - if (skb->nh.iph->tos & IP_TOS_HIGHRELIABILITY) + if (tos & IP_TOS_HIGHRELIABILITY) return 2; - if (skb->nh.iph->tos & IP_TOS_HIGHTHROUGHPUT) + if (tos & IP_TOS_HIGHTHROUGHPUT) return 1; - if (skb->nh.iph->tos & IP_TOS_LOWDELAY) + if (tos & IP_TOS_LOWDELAY) return 0; } if (card->qdio.do_prio_queueing==QETH_PRIO_Q_ING_PREC) - return 3 - (skb->nh.iph->tos >> 6); + return 3 - (tos >> 6); } else if (card->qdio.do_prio_queueing && (ipv == 6)) { /* TODO: IPv6!!! */ } @@ -3866,9 +3871,9 @@ __qeth_prepare_skb(struct qeth_card *car * memcpys instead of one memmove to save cycles. 
*/ skb_push(skb, VLAN_HLEN); - memcpy(skb->data, skb->data + 4, 4); - memcpy(skb->data + 4, skb->data + 8, 4); - memcpy(skb->data + 8, skb->data + 12, 4); + skb_copy_to_linear_data(skb, skb->data + 4, 4); + skb_copy_to_linear_data_offset(skb, 4, skb->data + 8, 4); + skb_copy_to_linear_data_offset(skb, 8, skb->data + 12, 4); tag = (u16 *)(skb->data + 12); /* * first two bytes = ETH_P_8021Q (0x8100) @@ -4039,7 +4044,8 @@ qeth_fill_header(struct qeth_card *card, *((u32 *) skb->dst->neighbour->primary_key); } else { /* fill in destination address used in ip header */ - *((u32 *) (&hdr->hdr.l3.dest_addr[12])) = skb->nh.iph->daddr; + *((u32 *)(&hdr->hdr.l3.dest_addr[12])) = + ip_hdr(skb)->daddr; } } else if (ipv == 6) { /* IPv6 or passthru */ hdr->hdr.l3.flags = qeth_get_qeth_hdr_flags6(cast_type); @@ -4048,7 +4054,8 @@ qeth_fill_header(struct qeth_card *card, skb->dst->neighbour->primary_key, 16); } else { /* fill in destination address used in ip header */ - memcpy(hdr->hdr.l3.dest_addr, &skb->nh.ipv6h->daddr, 16); + memcpy(hdr->hdr.l3.dest_addr, + &ipv6_hdr(skb)->daddr, 16); } } else { /* passthrough */ if((skb->dev->type == ARPHRD_IEEE802_TR) && diff -puN drivers/s390/net/qeth_tso.h~git-net drivers/s390/net/qeth_tso.h --- a/drivers/s390/net/qeth_tso.h~git-net +++ a/drivers/s390/net/qeth_tso.h @@ -40,8 +40,8 @@ qeth_tso_fill_header(struct qeth_card *c QETH_DBF_TEXT(trace, 5, "tsofhdr"); hdr = (struct qeth_hdr_tso *) skb->data; - iph = skb->nh.iph; - tcph = skb->h.th; + iph = ip_hdr(skb); + tcph = tcp_hdr(skb); /*fix header to TSO values ...*/ hdr->hdr.hdr.l3.id = QETH_HEADER_TYPE_TSO; /*set values which are fix for the first approach ...*/ @@ -63,13 +63,9 @@ qeth_tso_fill_header(struct qeth_card *c static inline void qeth_tso_set_tcpip_header(struct qeth_card *card, struct sk_buff *skb) { - struct iphdr *iph; - struct ipv6hdr *ip6h; - struct tcphdr *tcph; - - iph = skb->nh.iph; - ip6h = skb->nh.ipv6h; - tcph = skb->h.th; + struct iphdr *iph = ip_hdr(skb); + struct ipv6hdr *ip6h = ipv6_hdr(skb); + struct tcphdr *tcph = tcp_hdr(skb); tcph->check = 0; if (skb->protocol == ETH_P_IPV6) { diff -puN drivers/scsi/scsi_netlink.c~git-net drivers/scsi/scsi_netlink.c --- a/drivers/scsi/scsi_netlink.c~git-net +++ a/drivers/scsi/scsi_netlink.c @@ -50,7 +50,7 @@ scsi_nl_rcv_msg(struct sk_buff *skb) while (skb->len >= NLMSG_SPACE(0)) { err = 0; - nlh = (struct nlmsghdr *) skb->data; + nlh = nlmsg_hdr(skb); if ((nlh->nlmsg_len < (sizeof(*nlh) + sizeof(*hdr))) || (skb->len < nlh->nlmsg_len)) { printk(KERN_WARNING "%s: discarding partial skb\n", @@ -168,7 +168,8 @@ scsi_netlink_init(void) } scsi_nl_sock = netlink_kernel_create(NETLINK_SCSITRANSPORT, - SCSI_NL_GRP_CNT, scsi_nl_rcv, THIS_MODULE); + SCSI_NL_GRP_CNT, scsi_nl_rcv, NULL, + THIS_MODULE); if (!scsi_nl_sock) { printk(KERN_ERR "%s: register of recieve handler failed\n", __FUNCTION__); diff -puN drivers/scsi/scsi_transport_iscsi.c~git-net drivers/scsi/scsi_transport_iscsi.c --- a/drivers/scsi/scsi_transport_iscsi.c~git-net +++ a/drivers/scsi/scsi_transport_iscsi.c @@ -1081,7 +1081,7 @@ iscsi_if_rx(struct sock *sk, int len) struct nlmsghdr *nlh; struct iscsi_uevent *ev; - nlh = (struct nlmsghdr *)skb->data; + nlh = nlmsg_hdr(skb); if (nlh->nlmsg_len < sizeof(*nlh) || skb->len < nlh->nlmsg_len) { break; @@ -1435,7 +1435,7 @@ static __init int iscsi_transport_init(v if (err) goto unregister_conn_class; - nls = netlink_kernel_create(NETLINK_ISCSI, 1, iscsi_if_rx, + nls = netlink_kernel_create(NETLINK_ISCSI, 1, iscsi_if_rx, NULL, THIS_MODULE); if (!nls) { err 
= -ENOBUFS; diff -puN drivers/usb/atm/usbatm.c~git-net drivers/usb/atm/usbatm.c --- a/drivers/usb/atm/usbatm.c~git-net +++ a/drivers/usb/atm/usbatm.c @@ -343,7 +343,7 @@ static void usbatm_extract_one_cell(stru UDSL_ASSERT(sarb->tail + ATM_CELL_PAYLOAD <= sarb->end); } - memcpy(sarb->tail, source + ATM_CELL_HEADER, ATM_CELL_PAYLOAD); + memcpy(skb_tail_pointer(sarb), source + ATM_CELL_HEADER, ATM_CELL_PAYLOAD); __skb_put(sarb, ATM_CELL_PAYLOAD); if (pti & 1) { @@ -370,7 +370,7 @@ static void usbatm_extract_one_cell(stru goto out; } - if (crc32_be(~0, sarb->tail - pdu_length, pdu_length) != 0xc704dd7b) { + if (crc32_be(~0, skb_tail_pointer(sarb) - pdu_length, pdu_length) != 0xc704dd7b) { atm_rldbg(instance, "%s: packet failed crc check (vcc: 0x%p)!\n", __func__, vcc); atomic_inc(&vcc->stats->rx_err); @@ -396,7 +396,9 @@ static void usbatm_extract_one_cell(stru goto out; /* atm_charge increments rx_drop */ } - memcpy(skb->data, sarb->tail - pdu_length, length); + skb_copy_to_linear_data(skb, + skb_tail_pointer(sarb) - pdu_length, + length); __skb_put(skb, length); vdbg("%s: sending skb 0x%p, skb->len %u, skb->truesize %u", @@ -484,7 +486,7 @@ static unsigned int usbatm_write_cells(s ptr[4] = 0xec; ptr += ATM_CELL_HEADER; - memcpy(ptr, skb->data, data_len); + skb_copy_from_linear_data(skb, ptr, data_len); ptr += data_len; __skb_pull(skb, data_len); diff -puN drivers/usb/gadget/ether.c~git-net drivers/usb/gadget/ether.c --- a/drivers/usb/gadget/ether.c~git-net +++ a/drivers/usb/gadget/ether.c @@ -1766,7 +1766,6 @@ static void rx_complete (struct usb_ep * break; } - skb->dev = dev->net; skb->protocol = eth_type_trans (skb, dev->net); dev->stats.rx_packets++; dev->stats.rx_bytes += skb->len; diff -puN drivers/usb/net/asix.c~git-net drivers/usb/net/asix.c --- a/drivers/usb/net/asix.c~git-net +++ a/drivers/usb/net/asix.c @@ -298,7 +298,7 @@ static int asix_rx_fixup(struct usbnet * if (ax_skb) { ax_skb->len = size; ax_skb->data = packet; - ax_skb->tail = packet + size; + skb_set_tail_pointer(ax_skb, size); usbnet_skb_return(dev, ax_skb); } else { return 0; @@ -338,7 +338,7 @@ static struct sk_buff *asix_tx_fixup(str && ((headroom + tailroom) >= (4 + padlen))) { if ((headroom < 4) || (tailroom < padlen)) { skb->data = memmove(skb->head + 4, skb->data, skb->len); - skb->tail = skb->data + skb->len; + skb_set_tail_pointer(skb, skb->len); } } else { struct sk_buff *skb2; @@ -352,11 +352,11 @@ static struct sk_buff *asix_tx_fixup(str skb_push(skb, 4); packet_len = (((skb->len - 4) ^ 0x0000ffff) << 16) + (skb->len - 4); cpu_to_le32s(&packet_len); - memcpy(skb->data, &packet_len, sizeof(packet_len)); + skb_copy_to_linear_data(skb, &packet_len, sizeof(packet_len)); if ((skb->len % 512) == 0) { cpu_to_le32s(&padbytes); - memcpy( skb->tail, &padbytes, sizeof(padbytes)); + memcpy(skb_tail_pointer(skb), &padbytes, sizeof(padbytes)); skb_put(skb, sizeof(padbytes)); } return skb; diff -puN drivers/usb/net/catc.c~git-net drivers/usb/net/catc.c --- a/drivers/usb/net/catc.c~git-net +++ a/drivers/usb/net/catc.c @@ -255,7 +255,6 @@ static void catc_rx_done(struct urb *urb if (!(skb = dev_alloc_skb(pkt_len))) return; - skb->dev = catc->netdev; eth_copy_and_sum(skb, pkt_start + pkt_offset, pkt_len, 0); skb_put(skb, pkt_len); @@ -419,7 +418,7 @@ static int catc_hard_start_xmit(struct s catc->tx_ptr = (((catc->tx_ptr - 1) >> 6) + 1) << 6; tx_buf = catc->tx_buf[catc->tx_idx] + catc->tx_ptr; *((u16*)tx_buf) = (catc->is_f5u011) ? 
cpu_to_be16((u16)skb->len) : cpu_to_le16((u16)skb->len); - memcpy(tx_buf + 2, skb->data, skb->len); + skb_copy_from_linear_data(skb, tx_buf + 2, skb->len); catc->tx_ptr += skb->len + 2; if (!test_and_set_bit(TX_RUNNING, &catc->flags)) diff -puN drivers/usb/net/gl620a.c~git-net drivers/usb/net/gl620a.c --- a/drivers/usb/net/gl620a.c~git-net +++ a/drivers/usb/net/gl620a.c @@ -157,7 +157,7 @@ genelink_tx_fixup(struct usbnet *dev, st if ((headroom < (4 + 4*1)) || (tailroom < padlen)) { skb->data = memmove(skb->head + (4 + 4*1), skb->data, skb->len); - skb->tail = skb->data + skb->len; + skb_set_tail_pointer(skb, skb->len); } } else { struct sk_buff *skb2; diff -puN drivers/usb/net/kaweth.c~git-net drivers/usb/net/kaweth.c --- a/drivers/usb/net/kaweth.c~git-net +++ a/drivers/usb/net/kaweth.c @@ -636,8 +636,6 @@ static void kaweth_usb_receive(struct ur skb_reserve(skb, 2); /* Align IP on 16 byte boundaries */ - skb->dev = net; - eth_copy_and_sum(skb, kaweth->rx_buf + 2, pkt_len, 0); skb_put(skb, pkt_len); diff -puN drivers/usb/net/net1080.c~git-net drivers/usb/net/net1080.c --- a/drivers/usb/net/net1080.c~git-net +++ a/drivers/usb/net/net1080.c @@ -520,7 +520,7 @@ net1080_tx_fixup(struct usbnet *dev, str skb->data = memmove(skb->head + sizeof (struct nc_header), skb->data, skb->len); - skb->tail = skb->data + len; + skb_set_tail_pointer(skb, len); goto encapsulate; } } diff -puN drivers/usb/net/pegasus.c~git-net drivers/usb/net/pegasus.c --- a/drivers/usb/net/pegasus.c~git-net +++ a/drivers/usb/net/pegasus.c @@ -574,7 +574,6 @@ static void fill_skb_pool(pegasus_t * pe */ if (pegasus->rx_pool[i] == NULL) return; - pegasus->rx_pool[i]->dev = pegasus->net; skb_reserve(pegasus->rx_pool[i], 2); } } @@ -883,7 +882,7 @@ static int pegasus_start_xmit(struct sk_ netif_stop_queue(net); ((__le16 *) pegasus->tx_buff)[0] = cpu_to_le16(l16); - memcpy(pegasus->tx_buff + 2, skb->data, skb->len); + skb_copy_from_linear_data(skb, pegasus->tx_buff + 2, skb->len); usb_fill_bulk_urb(pegasus->tx_urb, pegasus->usb, usb_sndbulkpipe(pegasus->usb, 2), pegasus->tx_buff, count, @@ -1408,8 +1407,10 @@ static void pegasus_disconnect(struct us unlink_all_urbs(pegasus); free_all_urbs(pegasus); free_skb_pool(pegasus); - if (pegasus->rx_skb) + if (pegasus->rx_skb != NULL) { dev_kfree_skb(pegasus->rx_skb); + pegasus->rx_skb = NULL; + } free_netdev(pegasus->net); } diff -puN drivers/usb/net/rndis_host.c~git-net drivers/usb/net/rndis_host.c --- a/drivers/usb/net/rndis_host.c~git-net +++ a/drivers/usb/net/rndis_host.c @@ -588,7 +588,7 @@ rndis_tx_fixup(struct usbnet *dev, struc if (likely((sizeof *hdr) <= room)) { skb->data = memmove(skb->head + sizeof *hdr, skb->data, len); - skb->tail = skb->data + len; + skb_set_tail_pointer(skb, len); goto fill; } } diff -puN drivers/usb/net/rtl8150.c~git-net drivers/usb/net/rtl8150.c --- a/drivers/usb/net/rtl8150.c~git-net +++ a/drivers/usb/net/rtl8150.c @@ -646,7 +646,6 @@ static void fill_skb_pool(rtl8150_t *dev if (!skb) { return; } - skb->dev = dev->netdev; skb_reserve(skb, 2); dev->rx_skb_pool[i] = skb; } diff -puN drivers/usb/net/usbnet.c~git-net drivers/usb/net/usbnet.c --- a/drivers/usb/net/usbnet.c~git-net +++ a/drivers/usb/net/usbnet.c @@ -203,7 +203,6 @@ void usbnet_skb_return (struct usbnet *d { int status; - skb->dev = dev->net; skb->protocol = eth_type_trans (skb, dev->net); dev->stats.rx_packets++; dev->stats.rx_bytes += skb->len; diff -puN fs/compat_ioctl.c~git-net fs/compat_ioctl.c --- a/fs/compat_ioctl.c~git-net +++ a/fs/compat_ioctl.c @@ -266,6 +266,23 @@ static int 
do_siocgstamp(unsigned int fd return err; } +static int do_siocgstampns(unsigned int fd, unsigned int cmd, unsigned long arg) +{ + struct compat_timespec __user *up = compat_ptr(arg); + struct timespec kts; + mm_segment_t old_fs = get_fs(); + int err; + + set_fs(KERNEL_DS); + err = sys_ioctl(fd, cmd, (unsigned long)&kts); + set_fs(old_fs); + if (!err) { + err = put_user(kts.tv_sec, &up->tv_sec); + err |= __put_user(kts.tv_nsec, &up->tv_nsec); + } + return err; +} + struct ifmap32 { compat_ulong_t mem_start; compat_ulong_t mem_end; @@ -2437,6 +2454,7 @@ HANDLE_IOCTL(SIOCBRDELIF, dev_ifsioc) /* Note SIOCRTMSG is no longer, so this is safe and * the user would have seen just an -EINVAL anyways. */ HANDLE_IOCTL(SIOCRTMSG, ret_einval) HANDLE_IOCTL(SIOCGSTAMP, do_siocgstamp) +HANDLE_IOCTL(SIOCGSTAMPNS, do_siocgstampns) #endif #ifdef CONFIG_BLOCK HANDLE_IOCTL(HDIO_GETGEO, hdio_getgeo) diff -puN fs/ecryptfs/netlink.c~git-net fs/ecryptfs/netlink.c --- a/fs/ecryptfs/netlink.c~git-net +++ a/fs/ecryptfs/netlink.c @@ -97,7 +97,7 @@ out: */ static int ecryptfs_process_nl_response(struct sk_buff *skb) { - struct nlmsghdr *nlh = (struct nlmsghdr*)skb->data; + struct nlmsghdr *nlh = nlmsg_hdr(skb); struct ecryptfs_message *msg = NLMSG_DATA(nlh); int rc; @@ -181,7 +181,7 @@ receive: "rc = [%d]\n", rc); return; } - nlh = (struct nlmsghdr *)skb->data; + nlh = nlmsg_hdr(skb); if (!NLMSG_OK(nlh, skb->len)) { ecryptfs_printk(KERN_ERR, "Received corrupt netlink " "message\n"); @@ -229,7 +229,7 @@ int ecryptfs_init_netlink(void) ecryptfs_nl_sock = netlink_kernel_create(NETLINK_ECRYPTFS, 0, ecryptfs_receive_nl_message, - THIS_MODULE); + NULL, THIS_MODULE); if (!ecryptfs_nl_sock) { rc = -EIO; ecryptfs_printk(KERN_ERR, "Failed to create netlink socket\n"); diff -puN include/asm-alpha/socket.h~git-net include/asm-alpha/socket.h --- a/include/asm-alpha/socket.h~git-net +++ a/include/asm-alpha/socket.h @@ -52,6 +52,8 @@ #define SO_PEERSEC 30 #define SO_PASSSEC 34 +#define SO_TIMESTAMPNS 35 +#define SCM_TIMESTAMPNS SO_TIMESTAMPNS /* Security levels - as per NRL IPv6 - don't actually do anything */ #define SO_SECURITY_AUTHENTICATION 19 diff -puN include/asm-alpha/sockios.h~git-net include/asm-alpha/sockios.h --- a/include/asm-alpha/sockios.h~git-net +++ a/include/asm-alpha/sockios.h @@ -10,6 +10,7 @@ #define SIOCSPGRP _IOW('s', 8, pid_t) #define SIOCGPGRP _IOR('s', 9, pid_t) -#define SIOCGSTAMP 0x8906 /* Get stamp - linux-specific */ +#define SIOCGSTAMP 0x8906 /* Get stamp (timeval) */ +#define SIOCGSTAMPNS 0x8907 /* Get stamp (timespec) */ #endif /* _ASM_ALPHA_SOCKIOS_H */ diff -puN include/asm-arm/div64.h~git-net include/asm-arm/div64.h --- a/include/asm-arm/div64.h~git-net +++ a/include/asm-arm/div64.h @@ -2,6 +2,7 @@ #define __ASM_ARM_DIV64 #include +#include /* * The semantics of do_div() are: @@ -223,4 +224,6 @@ #endif +extern uint64_t div64_64(uint64_t dividend, uint64_t divisor); + #endif diff -puN include/asm-arm/socket.h~git-net include/asm-arm/socket.h --- a/include/asm-arm/socket.h~git-net +++ a/include/asm-arm/socket.h @@ -49,5 +49,7 @@ #define SO_PEERSEC 31 #define SO_PASSSEC 34 +#define SO_TIMESTAMPNS 35 +#define SCM_TIMESTAMPNS SO_TIMESTAMPNS #endif /* _ASM_SOCKET_H */ diff -puN include/asm-arm/sockios.h~git-net include/asm-arm/sockios.h --- a/include/asm-arm/sockios.h~git-net +++ a/include/asm-arm/sockios.h @@ -7,6 +7,7 @@ #define FIOGETOWN 0x8903 #define SIOCGPGRP 0x8904 #define SIOCATMARK 0x8905 -#define SIOCGSTAMP 0x8906 /* Get stamp */ +#define SIOCGSTAMP 0x8906 /* Get stamp (timeval) */ +#define 
SIOCGSTAMPNS 0x8907 /* Get stamp (timespec) */ #endif diff -puN include/asm-arm26/socket.h~git-net include/asm-arm26/socket.h --- a/include/asm-arm26/socket.h~git-net +++ a/include/asm-arm26/socket.h @@ -49,5 +49,7 @@ #define SO_PEERSEC 31 #define SO_PASSSEC 34 +#define SO_TIMESTAMPNS 35 +#define SCM_TIMESTAMPNS SO_TIMESTAMPNS #endif /* _ASM_SOCKET_H */ diff -puN include/asm-arm26/sockios.h~git-net include/asm-arm26/sockios.h --- a/include/asm-arm26/sockios.h~git-net +++ a/include/asm-arm26/sockios.h @@ -7,6 +7,7 @@ #define FIOGETOWN 0x8903 #define SIOCGPGRP 0x8904 #define SIOCATMARK 0x8905 -#define SIOCGSTAMP 0x8906 /* Get stamp */ +#define SIOCGSTAMP 0x8906 /* Get stamp (timeval) */ +#define SIOCGSTAMPNS 0x8907 /* Get stamp (timespec) */ #endif diff -puN include/asm-avr32/socket.h~git-net include/asm-avr32/socket.h --- a/include/asm-avr32/socket.h~git-net +++ a/include/asm-avr32/socket.h @@ -49,5 +49,7 @@ #define SO_PEERSEC 31 #define SO_PASSSEC 34 +#define SO_TIMESTAMPNS 35 +#define SCM_TIMESTAMPNS SO_TIMESTAMPNS #endif /* __ASM_AVR32_SOCKET_H */ diff -puN include/asm-avr32/sockios.h~git-net include/asm-avr32/sockios.h --- a/include/asm-avr32/sockios.h~git-net +++ a/include/asm-avr32/sockios.h @@ -7,6 +7,7 @@ #define FIOGETOWN 0x8903 #define SIOCGPGRP 0x8904 #define SIOCATMARK 0x8905 -#define SIOCGSTAMP 0x8906 /* Get stamp */ +#define SIOCGSTAMP 0x8906 /* Get stamp (timeval) */ +#define SIOCGSTAMPNS 0x8907 /* Get stamp (timespec) */ #endif /* __ASM_AVR32_SOCKIOS_H */ diff -puN include/asm-cris/socket.h~git-net include/asm-cris/socket.h --- a/include/asm-cris/socket.h~git-net +++ a/include/asm-cris/socket.h @@ -51,6 +51,8 @@ #define SO_PEERSEC 31 #define SO_PASSSEC 34 +#define SO_TIMESTAMPNS 35 +#define SCM_TIMESTAMPNS SO_TIMESTAMPNS #endif /* _ASM_SOCKET_H */ diff -puN include/asm-cris/sockios.h~git-net include/asm-cris/sockios.h --- a/include/asm-cris/sockios.h~git-net +++ a/include/asm-cris/sockios.h @@ -7,6 +7,7 @@ #define FIOGETOWN 0x8903 #define SIOCGPGRP 0x8904 #define SIOCATMARK 0x8905 -#define SIOCGSTAMP 0x8906 /* Get stamp */ +#define SIOCGSTAMP 0x8906 /* Get stamp (timeval) */ +#define SIOCGSTAMPNS 0x8907 /* Get stamp (timespec) */ #endif diff -puN include/asm-frv/socket.h~git-net include/asm-frv/socket.h --- a/include/asm-frv/socket.h~git-net +++ a/include/asm-frv/socket.h @@ -49,6 +49,8 @@ #define SO_PEERSEC 31 #define SO_PASSSEC 34 +#define SO_TIMESTAMPNS 35 +#define SCM_TIMESTAMPNS SO_TIMESTAMPNS #endif /* _ASM_SOCKET_H */ diff -puN include/asm-frv/sockios.h~git-net include/asm-frv/sockios.h --- a/include/asm-frv/sockios.h~git-net +++ a/include/asm-frv/sockios.h @@ -7,7 +7,8 @@ #define FIOGETOWN 0x8903 #define SIOCGPGRP 0x8904 #define SIOCATMARK 0x8905 -#define SIOCGSTAMP 0x8906 /* Get stamp */ +#define SIOCGSTAMP 0x8906 /* Get stamp (timeval) */ +#define SIOCGSTAMPNS 0x8907 /* Get stamp (timespec) */ #endif /* _ASM_SOCKIOS__ */ diff -puN include/asm-generic/div64.h~git-net include/asm-generic/div64.h --- a/include/asm-generic/div64.h~git-net +++ a/include/asm-generic/div64.h @@ -30,6 +30,11 @@ __rem; \ }) +static inline uint64_t div64_64(uint64_t dividend, uint64_t divisor) +{ + return dividend / divisor; +} + #elif BITS_PER_LONG == 32 extern uint32_t __div64_32(uint64_t *dividend, uint32_t divisor); @@ -49,6 +54,8 @@ extern uint32_t __div64_32(uint64_t *div __rem; \ }) +extern uint64_t div64_64(uint64_t dividend, uint64_t divisor); + #else /* BITS_PER_LONG == ?? 
*/ # error do_div() does not yet support the C64 diff -puN include/asm-h8300/socket.h~git-net include/asm-h8300/socket.h --- a/include/asm-h8300/socket.h~git-net +++ a/include/asm-h8300/socket.h @@ -49,5 +49,7 @@ #define SO_PEERSEC 31 #define SO_PASSSEC 34 +#define SO_TIMESTAMPNS 35 +#define SCM_TIMESTAMPNS SO_TIMESTAMPNS #endif /* _ASM_SOCKET_H */ diff -puN include/asm-h8300/sockios.h~git-net include/asm-h8300/sockios.h --- a/include/asm-h8300/sockios.h~git-net +++ a/include/asm-h8300/sockios.h @@ -7,6 +7,7 @@ #define FIOGETOWN 0x8903 #define SIOCGPGRP 0x8904 #define SIOCATMARK 0x8905 -#define SIOCGSTAMP 0x8906 /* Get stamp */ +#define SIOCGSTAMP 0x8906 /* Get stamp (timeval) */ +#define SIOCGSTAMPNS 0x8907 /* Get stamp (timespec) */ #endif /* __ARCH_H8300_SOCKIOS__ */ diff -puN include/asm-i386/div64.h~git-net include/asm-i386/div64.h --- a/include/asm-i386/div64.h~git-net +++ a/include/asm-i386/div64.h @@ -1,6 +1,8 @@ #ifndef __I386_DIV64 #define __I386_DIV64 +#include + /* * do_div() is NOT a C function. It wants to return * two values (the quotient and the remainder), but @@ -45,4 +47,6 @@ div_ll_X_l_rem(long long divs, long div, return dum2; } + +extern uint64_t div64_64(uint64_t dividend, uint64_t divisor); #endif diff -puN include/asm-i386/socket.h~git-net include/asm-i386/socket.h --- a/include/asm-i386/socket.h~git-net +++ a/include/asm-i386/socket.h @@ -49,5 +49,7 @@ #define SO_PEERSEC 31 #define SO_PASSSEC 34 +#define SO_TIMESTAMPNS 35 +#define SCM_TIMESTAMPNS SO_TIMESTAMPNS #endif /* _ASM_SOCKET_H */ diff -puN include/asm-i386/sockios.h~git-net include/asm-i386/sockios.h --- a/include/asm-i386/sockios.h~git-net +++ a/include/asm-i386/sockios.h @@ -7,6 +7,7 @@ #define FIOGETOWN 0x8903 #define SIOCGPGRP 0x8904 #define SIOCATMARK 0x8905 -#define SIOCGSTAMP 0x8906 /* Get stamp */ +#define SIOCGSTAMP 0x8906 /* Get stamp (timeval) */ +#define SIOCGSTAMPNS 0x8907 /* Get stamp (timespec) */ #endif diff -puN include/asm-ia64/socket.h~git-net include/asm-ia64/socket.h --- a/include/asm-ia64/socket.h~git-net +++ a/include/asm-ia64/socket.h @@ -58,5 +58,7 @@ #define SO_PEERSEC 31 #define SO_PASSSEC 34 +#define SO_TIMESTAMPNS 35 +#define SCM_TIMESTAMPNS SO_TIMESTAMPNS #endif /* _ASM_IA64_SOCKET_H */ diff -puN include/asm-ia64/sockios.h~git-net include/asm-ia64/sockios.h --- a/include/asm-ia64/sockios.h~git-net +++ a/include/asm-ia64/sockios.h @@ -14,6 +14,7 @@ #define FIOGETOWN 0x8903 #define SIOCGPGRP 0x8904 #define SIOCATMARK 0x8905 -#define SIOCGSTAMP 0x8906 /* Get stamp */ +#define SIOCGSTAMP 0x8906 /* Get stamp (timeval) */ +#define SIOCGSTAMPNS 0x8907 /* Get stamp (timespec) */ #endif /* _ASM_IA64_SOCKIOS_H */ diff -puN include/asm-m32r/socket.h~git-net include/asm-m32r/socket.h --- a/include/asm-m32r/socket.h~git-net +++ a/include/asm-m32r/socket.h @@ -49,5 +49,7 @@ #define SO_PEERSEC 31 #define SO_PASSSEC 34 +#define SO_TIMESTAMPNS 35 +#define SCM_TIMESTAMPNS SO_TIMESTAMPNS #endif /* _ASM_M32R_SOCKET_H */ diff -puN include/asm-m32r/sockios.h~git-net include/asm-m32r/sockios.h --- a/include/asm-m32r/sockios.h~git-net +++ a/include/asm-m32r/sockios.h @@ -7,6 +7,7 @@ #define FIOGETOWN 0x8903 #define SIOCGPGRP 0x8904 #define SIOCATMARK 0x8905 -#define SIOCGSTAMP 0x8906 /* Get stamp */ +#define SIOCGSTAMP 0x8906 /* Get stamp (timeval) */ +#define SIOCGSTAMPNS 0x8907 /* Get stamp (timespec) */ #endif /* _ASM_M32R_SOCKIOS_H */ diff -puN include/asm-m68k/div64.h~git-net include/asm-m68k/div64.h --- a/include/asm-m68k/div64.h~git-net +++ a/include/asm-m68k/div64.h @@ -1,6 +1,8 @@ #ifndef 
_M68K_DIV64_H #define _M68K_DIV64_H +#include + /* n = n / base; return rem; */ #define do_div(n, base) ({ \ @@ -23,4 +25,5 @@ __rem; \ }) +extern uint64_t div64_64(uint64_t dividend, uint64_t divisor); #endif /* _M68K_DIV64_H */ diff -puN include/asm-m68k/socket.h~git-net include/asm-m68k/socket.h --- a/include/asm-m68k/socket.h~git-net +++ a/include/asm-m68k/socket.h @@ -49,5 +49,7 @@ #define SO_PEERSEC 31 #define SO_PASSSEC 34 +#define SO_TIMESTAMPNS 35 +#define SCM_TIMESTAMPNS SO_TIMESTAMPNS #endif /* _ASM_SOCKET_H */ diff -puN include/asm-m68k/sockios.h~git-net include/asm-m68k/sockios.h --- a/include/asm-m68k/sockios.h~git-net +++ a/include/asm-m68k/sockios.h @@ -7,6 +7,7 @@ #define FIOGETOWN 0x8903 #define SIOCGPGRP 0x8904 #define SIOCATMARK 0x8905 -#define SIOCGSTAMP 0x8906 /* Get stamp */ +#define SIOCGSTAMP 0x8906 /* Get stamp (timeval) */ +#define SIOCGSTAMPNS 0x8907 /* Get stamp (timespec) */ #endif /* __ARCH_M68K_SOCKIOS__ */ diff -puN include/asm-mips/div64.h~git-net include/asm-mips/div64.h --- a/include/asm-mips/div64.h~git-net +++ a/include/asm-mips/div64.h @@ -1,6 +1,6 @@ /* * Copyright (C) 2000, 2004 Maciej W. Rozycki - * Copyright (C) 2003 Ralf Baechle + * Copyright (C) 2003, 07 Ralf Baechle (ralf@linux-mips.org) * * This file is subject to the terms and conditions of the GNU General Public * License. See the file "COPYING" in the main directory of this archive @@ -9,6 +9,8 @@ #ifndef _ASM_DIV64_H #define _ASM_DIV64_H +#include + #if (_MIPS_SZLONG == 32) #include @@ -78,6 +80,8 @@ __quot = __quot << 32 | __low; \ (n) = __quot; \ __mod; }) + +extern uint64_t div64_64(uint64_t dividend, uint64_t divisor); #endif /* (_MIPS_SZLONG == 32) */ #if (_MIPS_SZLONG == 64) @@ -101,6 +105,11 @@ (n) = __quot; \ __mod; }) +static inline uint64_t div64_64(uint64_t dividend, uint64_t divisor) +{ + return dividend / divisor; +} + #endif /* (_MIPS_SZLONG == 64) */ #endif /* _ASM_DIV64_H */ diff -puN include/asm-mips/socket.h~git-net include/asm-mips/socket.h --- a/include/asm-mips/socket.h~git-net +++ a/include/asm-mips/socket.h @@ -70,6 +70,8 @@ To add: #define SO_REUSEPORT 0x0200 /* A #define SO_SNDBUFFORCE 31 #define SO_RCVBUFFORCE 33 #define SO_PASSSEC 34 +#define SO_TIMESTAMPNS 35 +#define SCM_TIMESTAMPNS SO_TIMESTAMPNS #ifdef __KERNEL__ diff -puN include/asm-mips/sockios.h~git-net include/asm-mips/sockios.h --- a/include/asm-mips/sockios.h~git-net +++ a/include/asm-mips/sockios.h @@ -20,6 +20,7 @@ #define SIOCSPGRP _IOW('s', 8, pid_t) #define SIOCGPGRP _IOR('s', 9, pid_t) -#define SIOCGSTAMP 0x8906 /* Get stamp - linux-specific */ +#define SIOCGSTAMP 0x8906 /* Get stamp (timeval) */ +#define SIOCGSTAMPNS 0x8907 /* Get stamp (timespec) */ #endif /* _ASM_SOCKIOS_H */ diff -puN include/asm-parisc/socket.h~git-net include/asm-parisc/socket.h --- a/include/asm-parisc/socket.h~git-net +++ a/include/asm-parisc/socket.h @@ -33,6 +33,8 @@ #define SO_PEERCRED 0x4011 #define SO_TIMESTAMP 0x4012 #define SCM_TIMESTAMP SO_TIMESTAMP +#define SO_TIMESTAMPNS 0x4013 +#define SCM_TIMESTAMPNS SO_TIMESTAMPNS /* Security levels - as per NRL IPv6 - don't actually do anything */ #define SO_SECURITY_AUTHENTICATION 0x4016 diff -puN include/asm-parisc/sockios.h~git-net include/asm-parisc/sockios.h --- a/include/asm-parisc/sockios.h~git-net +++ a/include/asm-parisc/sockios.h @@ -7,6 +7,7 @@ #define FIOGETOWN 0x8903 #define SIOCGPGRP 0x8904 #define SIOCATMARK 0x8905 -#define SIOCGSTAMP 0x8906 /* Get stamp */ +#define SIOCGSTAMP 0x8906 /* Get stamp (timeval) */ +#define SIOCGSTAMPNS 0x8907 /* Get stamp 
(timespec) */ #endif diff -puN include/asm-powerpc/socket.h~git-net include/asm-powerpc/socket.h --- a/include/asm-powerpc/socket.h~git-net +++ a/include/asm-powerpc/socket.h @@ -56,5 +56,7 @@ #define SO_PEERSEC 31 #define SO_PASSSEC 34 +#define SO_TIMESTAMPNS 35 +#define SCM_TIMESTAMPNS SO_TIMESTAMPNS #endif /* _ASM_POWERPC_SOCKET_H */ diff -puN include/asm-powerpc/sockios.h~git-net include/asm-powerpc/sockios.h --- a/include/asm-powerpc/sockios.h~git-net +++ a/include/asm-powerpc/sockios.h @@ -14,6 +14,7 @@ #define FIOGETOWN 0x8903 #define SIOCGPGRP 0x8904 #define SIOCATMARK 0x8905 -#define SIOCGSTAMP 0x8906 /* Get stamp */ +#define SIOCGSTAMP 0x8906 /* Get stamp (timeval) */ +#define SIOCGSTAMPNS 0x8907 /* Get stamp (timespec) */ #endif /* _ASM_POWERPC_SOCKIOS_H */ diff -puN include/asm-s390/socket.h~git-net include/asm-s390/socket.h --- a/include/asm-s390/socket.h~git-net +++ a/include/asm-s390/socket.h @@ -57,5 +57,7 @@ #define SO_PEERSEC 31 #define SO_PASSSEC 34 +#define SO_TIMESTAMPNS 35 +#define SCM_TIMESTAMPNS SO_TIMESTAMPNS #endif /* _ASM_SOCKET_H */ diff -puN include/asm-s390/sockios.h~git-net include/asm-s390/sockios.h --- a/include/asm-s390/sockios.h~git-net +++ a/include/asm-s390/sockios.h @@ -15,6 +15,7 @@ #define FIOGETOWN 0x8903 #define SIOCGPGRP 0x8904 #define SIOCATMARK 0x8905 -#define SIOCGSTAMP 0x8906 /* Get stamp */ +#define SIOCGSTAMP 0x8906 /* Get stamp (timeval) */ +#define SIOCGSTAMPNS 0x8907 /* Get stamp (timespec) */ #endif diff -puN include/asm-sh/socket.h~git-net include/asm-sh/socket.h --- a/include/asm-sh/socket.h~git-net +++ a/include/asm-sh/socket.h @@ -49,5 +49,7 @@ #define SO_PEERSEC 31 #define SO_PASSSEC 34 +#define SO_TIMESTAMPNS 35 +#define SCM_TIMESTAMPNS SO_TIMESTAMPNS #endif /* __ASM_SH_SOCKET_H */ diff -puN include/asm-sh/sockios.h~git-net include/asm-sh/sockios.h --- a/include/asm-sh/sockios.h~git-net +++ a/include/asm-sh/sockios.h @@ -9,5 +9,6 @@ #define SIOCSPGRP _IOW('s', 8, pid_t) #define SIOCGPGRP _IOR('s', 9, pid_t) -#define SIOCGSTAMP _IOR('s', 100, struct timeval) /* Get stamp - linux-specific */ +#define SIOCGSTAMP _IOR('s', 100, struct timeval) /* Get stamp (timeval) */ +#define SIOCGSTAMPNS _IOR('s', 101, struct timespec) /* Get stamp (timespec) */ #endif /* __ASM_SH_SOCKIOS_H */ diff -puN include/asm-sh64/sockios.h~git-net include/asm-sh64/sockios.h --- a/include/asm-sh64/sockios.h~git-net +++ a/include/asm-sh64/sockios.h @@ -20,5 +20,6 @@ #define SIOCSPGRP _IOW('s', 8, pid_t) #define SIOCGPGRP _IOR('s', 9, pid_t) -#define SIOCGSTAMP _IOR('s', 100, struct timeval) /* Get stamp - linux-specific */ +#define SIOCGSTAMP _IOR('s', 100, struct timeval) /* Get stamp (timeval) */ +#define SIOCGSTAMPNS _IOR('s', 101, struct timespec) /* Get stamp (timespec) */ #endif /* __ASM_SH64_SOCKIOS_H */ diff -puN include/asm-sparc/socket.h~git-net include/asm-sparc/socket.h --- a/include/asm-sparc/socket.h~git-net +++ a/include/asm-sparc/socket.h @@ -49,6 +49,8 @@ #define SO_PEERSEC 0x001e #define SO_PASSSEC 0x001f +#define SO_TIMESTAMPNS 0x0021 +#define SCM_TIMESTAMPNS SO_TIMESTAMPNS /* Security levels - as per NRL IPv6 - don't actually do anything */ #define SO_SECURITY_AUTHENTICATION 0x5001 diff -puN include/asm-sparc/sockios.h~git-net include/asm-sparc/sockios.h --- a/include/asm-sparc/sockios.h~git-net +++ a/include/asm-sparc/sockios.h @@ -7,7 +7,8 @@ #define FIOGETOWN 0x8903 #define SIOCGPGRP 0x8904 #define SIOCATMARK 0x8905 -#define SIOCGSTAMP 0x8906 /* Get stamp */ +#define SIOCGSTAMP 0x8906 /* Get stamp (timeval) */ +#define SIOCGSTAMPNS 0x8907 
/* Get stamp (timespec) */ #endif /* !(_ASM_SPARC_SOCKIOS_H) */ diff -puN include/asm-sparc64/socket.h~git-net include/asm-sparc64/socket.h --- a/include/asm-sparc64/socket.h~git-net +++ a/include/asm-sparc64/socket.h @@ -49,6 +49,8 @@ #define SO_PEERSEC 0x001e #define SO_PASSSEC 0x001f +#define SO_TIMESTAMPNS 0x0021 +#define SCM_TIMESTAMPNS SO_TIMESTAMPNS /* Security levels - as per NRL IPv6 - don't actually do anything */ #define SO_SECURITY_AUTHENTICATION 0x5001 diff -puN include/asm-sparc64/sockios.h~git-net include/asm-sparc64/sockios.h --- a/include/asm-sparc64/sockios.h~git-net +++ a/include/asm-sparc64/sockios.h @@ -7,7 +7,8 @@ #define FIOGETOWN 0x8903 #define SIOCGPGRP 0x8904 #define SIOCATMARK 0x8905 -#define SIOCGSTAMP 0x8906 /* Get stamp */ +#define SIOCGSTAMP 0x8906 /* Get stamp (timeval) */ +#define SIOCGSTAMPNS 0x8907 /* Get stamp (timespec) */ #endif /* !(_ASM_SPARC64_SOCKIOS_H) */ diff -puN include/asm-um/div64.h~git-net include/asm-um/div64.h --- a/include/asm-um/div64.h~git-net +++ a/include/asm-um/div64.h @@ -3,4 +3,5 @@ #include "asm/arch/div64.h" +extern uint64_t div64_64(uint64_t dividend, uint64_t divisor); #endif diff -puN include/asm-v850/socket.h~git-net include/asm-v850/socket.h --- a/include/asm-v850/socket.h~git-net +++ a/include/asm-v850/socket.h @@ -49,5 +49,7 @@ #define SO_PEERSEC 31 #define SO_PASSSEC 34 +#define SO_TIMESTAMPNS 35 +#define SCM_TIMESTAMPNS SO_TIMESTAMPNS #endif /* __V850_SOCKET_H__ */ diff -puN include/asm-v850/sockios.h~git-net include/asm-v850/sockios.h --- a/include/asm-v850/sockios.h~git-net +++ a/include/asm-v850/sockios.h @@ -7,6 +7,7 @@ #define FIOGETOWN 0x8903 #define SIOCGPGRP 0x8904 #define SIOCATMARK 0x8905 -#define SIOCGSTAMP 0x8906 /* Get stamp */ +#define SIOCGSTAMP 0x8906 /* Get stamp (timeval) */ +#define SIOCGSTAMPNS 0x8907 /* Get stamp (timespec) */ #endif /* __V850_SOCKIOS_H__ */ diff -puN include/asm-x86_64/socket.h~git-net include/asm-x86_64/socket.h --- a/include/asm-x86_64/socket.h~git-net +++ a/include/asm-x86_64/socket.h @@ -49,5 +49,7 @@ #define SO_PEERSEC 31 #define SO_PASSSEC 34 +#define SO_TIMESTAMPNS 35 +#define SCM_TIMESTAMPNS SO_TIMESTAMPNS #endif /* _ASM_SOCKET_H */ diff -puN include/asm-x86_64/sockios.h~git-net include/asm-x86_64/sockios.h --- a/include/asm-x86_64/sockios.h~git-net +++ a/include/asm-x86_64/sockios.h @@ -7,6 +7,7 @@ #define FIOGETOWN 0x8903 #define SIOCGPGRP 0x8904 #define SIOCATMARK 0x8905 -#define SIOCGSTAMP 0x8906 /* Get stamp */ +#define SIOCGSTAMP 0x8906 /* Get stamp (timeval) */ +#define SIOCGSTAMPNS 0x8907 /* Get stamp (timespec) */ #endif diff -puN include/asm-xtensa/div64.h~git-net include/asm-xtensa/div64.h --- a/include/asm-xtensa/div64.h~git-net +++ a/include/asm-xtensa/div64.h @@ -11,9 +11,15 @@ #ifndef _XTENSA_DIV64_H #define _XTENSA_DIV64_H +#include + #define do_div(n,base) ({ \ int __res = n % ((unsigned int) base); \ n /= (unsigned int) base; \ __res; }) +static inline uint64_t div64_64(uint64_t dividend, uint64_t divisor) +{ + return dividend / divisor; +} #endif diff -puN include/asm-xtensa/socket.h~git-net include/asm-xtensa/socket.h --- a/include/asm-xtensa/socket.h~git-net +++ a/include/asm-xtensa/socket.h @@ -60,5 +60,7 @@ #define SO_ACCEPTCONN 30 #define SO_PEERSEC 31 #define SO_PASSSEC 34 +#define SO_TIMESTAMPNS 35 +#define SCM_TIMESTAMPNS SO_TIMESTAMPNS #endif /* _XTENSA_SOCKET_H */ diff -puN include/asm-xtensa/sockios.h~git-net include/asm-xtensa/sockios.h --- a/include/asm-xtensa/sockios.h~git-net +++ a/include/asm-xtensa/sockios.h @@ -25,6 +25,7 @@ #define 
SIOCSPGRP _IOW('s', 8, pid_t) #define SIOCGPGRP _IOR('s', 9, pid_t) -#define SIOCGSTAMP 0x8906 /* Get stamp - linux-specific */ +#define SIOCGSTAMP 0x8906 /* Get stamp (timeval) */ +#define SIOCGSTAMPNS 0x8907 /* Get stamp (timespec) */ #endif /* _XTENSA_SOCKIOS_H */ diff -puN include/linux/Kbuild~git-net include/linux/Kbuild --- a/include/linux/Kbuild~git-net +++ a/include/linux/Kbuild @@ -69,9 +69,7 @@ header-y += hdsmart.h header-y += hysdn_if.h header-y += i2c-dev.h header-y += i8k.h -header-y += icmp.h header-y += if_arcnet.h -header-y += if_arp.h header-y += if_bonding.h header-y += if_cablemodem.h header-y += if_fc.h @@ -88,7 +86,6 @@ header-y += if_tunnel.h header-y += in6.h header-y += in_route.h header-y += ioctl.h -header-y += ip.h header-y += ipmi_msgdefs.h header-y += ip_mp_alg.h header-y += ipsec.h @@ -117,6 +114,7 @@ header-y += netrom.h header-y += nfs2.h header-y += nfs4_mount.h header-y += nfs_mount.h +header-y += nl80211.h header-y += oom.h header-y += param.h header-y += pci_regs.h @@ -211,8 +209,10 @@ unifdef-y += hiddev.h unifdef-y += hpet.h unifdef-y += i2c.h unifdef-y += i2o-dev.h +unifdef-y += icmp.h unifdef-y += icmpv6.h unifdef-y += if_addr.h +unifdef-y += if_arp.h unifdef-y += if_bridge.h unifdef-y += if_ec.h unifdef-y += if_eql.h @@ -232,6 +232,7 @@ unifdef-y += inet_diag.h unifdef-y += in.h unifdef-y += inotify.h unifdef-y += input.h +unifdef-y += ip.h unifdef-y += ipc.h unifdef-y += ipmi.h unifdef-y += ipv6.h diff -puN include/linux/atalk.h~git-net include/linux/atalk.h --- a/include/linux/atalk.h~git-net +++ a/include/linux/atalk.h @@ -101,7 +101,7 @@ struct ddpehdr { static __inline__ struct ddpehdr *ddp_hdr(struct sk_buff *skb) { - return (struct ddpehdr *)skb->h.raw; + return (struct ddpehdr *)skb_transport_header(skb); } /* AppleTalk AARP headers */ @@ -129,7 +129,7 @@ struct elapaarp { static __inline__ struct elapaarp *aarp_hdr(struct sk_buff *skb) { - return (struct elapaarp *)skb->h.raw; + return (struct elapaarp *)skb_transport_header(skb); } /* Not specified - how long till we drop a resolved entry */ diff -puN include/linux/dccp.h~git-net include/linux/dccp.h --- a/include/linux/dccp.h~git-net +++ a/include/linux/dccp.h @@ -260,19 +260,20 @@ enum { static inline struct dccp_hdr *dccp_hdr(const struct sk_buff *skb) { - return (struct dccp_hdr *)skb->h.raw; + return (struct dccp_hdr *)skb_transport_header(skb); } static inline struct dccp_hdr *dccp_zeroed_hdr(struct sk_buff *skb, int headlen) { - skb->h.raw = skb_push(skb, headlen); - memset(skb->h.raw, 0, headlen); - return dccp_hdr(skb); + skb_push(skb, headlen); + skb_reset_transport_header(skb); + return memset(skb_transport_header(skb), 0, headlen); } static inline struct dccp_hdr_ext *dccp_hdrx(const struct sk_buff *skb) { - return (struct dccp_hdr_ext *)(skb->h.raw + sizeof(struct dccp_hdr)); + return (struct dccp_hdr_ext *)(skb_transport_header(skb) + + sizeof(struct dccp_hdr)); } static inline unsigned int __dccp_basic_hdr_len(const struct dccp_hdr *dh) @@ -301,12 +302,14 @@ static inline __u64 dccp_hdr_seq(const s static inline struct dccp_hdr_request *dccp_hdr_request(struct sk_buff *skb) { - return (struct dccp_hdr_request *)(skb->h.raw + dccp_basic_hdr_len(skb)); + return (struct dccp_hdr_request *)(skb_transport_header(skb) + + dccp_basic_hdr_len(skb)); } static inline struct dccp_hdr_ack_bits *dccp_hdr_ack_bits(const struct sk_buff *skb) { - return (struct dccp_hdr_ack_bits *)(skb->h.raw + dccp_basic_hdr_len(skb)); + return (struct dccp_hdr_ack_bits *)(skb_transport_header(skb) + + 
dccp_basic_hdr_len(skb)); } static inline u64 dccp_hdr_ack_seq(const struct sk_buff *skb) @@ -317,12 +320,14 @@ static inline u64 dccp_hdr_ack_seq(const static inline struct dccp_hdr_response *dccp_hdr_response(struct sk_buff *skb) { - return (struct dccp_hdr_response *)(skb->h.raw + dccp_basic_hdr_len(skb)); + return (struct dccp_hdr_response *)(skb_transport_header(skb) + + dccp_basic_hdr_len(skb)); } static inline struct dccp_hdr_reset *dccp_hdr_reset(struct sk_buff *skb) { - return (struct dccp_hdr_reset *)(skb->h.raw + dccp_basic_hdr_len(skb)); + return (struct dccp_hdr_reset *)(skb_transport_header(skb) + + dccp_basic_hdr_len(skb)); } static inline unsigned int __dccp_hdr_len(const struct dccp_hdr *dh) @@ -460,26 +465,27 @@ struct dccp_ackvec; * @dccps_service_list - second .. last service code on passive socket * @dccps_timestamp_time - time of latest TIMESTAMP option * @dccps_timestamp_echo - latest timestamp received on a TIMESTAMP option - * @dccps_l_ack_ratio - - * @dccps_r_ack_ratio - + * @dccps_l_ack_ratio - feature-local Ack Ratio + * @dccps_r_ack_ratio - feature-remote Ack Ratio * @dccps_pcslen - sender partial checksum coverage (via sockopt) * @dccps_pcrlen - receiver partial checksum coverage (via sockopt) * @dccps_ndp_count - number of Non Data Packets since last data packet - * @dccps_mss_cache - - * @dccps_minisock - + * @dccps_mss_cache - current value of MSS (path MTU minus header sizes) + * @dccps_minisock - associated minisock (accessed via dccp_msk) * @dccps_hc_rx_ackvec - rx half connection ack vector - * @dccps_hc_rx_ccid - - * @dccps_hc_tx_ccid - - * @dccps_options_received - - * @dccps_epoch - - * @dccps_role - Role of this sock, one of %dccp_role - * @dccps_hc_rx_insert_options - - * @dccps_hc_tx_insert_options - + * @dccps_hc_rx_ccid - CCID used for the receiver (or receiving half-connection) + * @dccps_hc_tx_ccid - CCID used for the sender (or sending half-connection) + * @dccps_options_received - parsed set of retrieved options + * @dccps_role - role of this sock, one of %dccp_role + * @dccps_hc_rx_insert_options - receiver wants to add options when acking + * @dccps_hc_tx_insert_options - sender wants to add options when sending * @dccps_xmit_timer - timer for when CCID is not ready to send + * @dccps_syn_rtt - RTT sample from Request/Response exchange (in usecs) */ struct dccp_sock { /* inet_connection_sock has to be the first member of dccp_sock */ struct inet_connection_sock dccps_inet_connection; +#define dccps_syn_rtt dccps_inet_connection.icsk_ack.lrcvtime __u64 dccps_swl; __u64 dccps_swh; __u64 dccps_awl; diff -puN include/linux/fib_rules.h~git-net include/linux/fib_rules.h --- a/include/linux/fib_rules.h~git-net +++ a/include/linux/fib_rules.h @@ -5,8 +5,13 @@ #include /* rule is permanent, and cannot be deleted */ -#define FIB_RULE_PERMANENT 1 -#define FIB_RULE_INVERT 2 +#define FIB_RULE_PERMANENT 0x00000001 +#define FIB_RULE_INVERT 0x00000002 +#define FIB_RULE_UNRESOLVED 0x00000004 +#define FIB_RULE_DEV_DETACHED 0x00000008 + +/* try to find source address in routing lookups */ +#define FIB_RULE_FIND_SADDR 0x00010000 struct fib_rule_hdr { @@ -29,7 +34,7 @@ enum FRA_DST, /* destination address */ FRA_SRC, /* source address */ FRA_IFNAME, /* interface name */ - FRA_UNUSED1, + FRA_GOTO, /* target to jump to (FR_ACT_GOTO) */ FRA_UNUSED2, FRA_PRIORITY, /* priority/preference */ FRA_UNUSED3, @@ -51,8 +56,8 @@ enum { FR_ACT_UNSPEC, FR_ACT_TO_TBL, /* Pass to fixed table */ - FR_ACT_RES1, - FR_ACT_RES2, + FR_ACT_GOTO, /* Jump to another rule */ + 
FR_ACT_NOP, /* No operation */ FR_ACT_RES3, FR_ACT_RES4, FR_ACT_BLACKHOLE, /* Drop without notification */ diff -puN include/linux/hdlc.h~git-net include/linux/hdlc.h --- a/include/linux/hdlc.h~git-net +++ a/include/linux/hdlc.h @@ -132,8 +132,8 @@ static __inline__ __be16 hdlc_type_trans { hdlc_device *hdlc = dev_to_hdlc(dev); - skb->mac.raw = skb->data; - skb->dev = dev; + skb->dev = dev; + skb_reset_mac_header(skb); if (hdlc->proto->type_trans) return hdlc->proto->type_trans(skb, dev); diff -puN include/linux/icmp.h~git-net include/linux/icmp.h --- a/include/linux/icmp.h~git-net +++ a/include/linux/icmp.h @@ -82,6 +82,15 @@ struct icmphdr { } un; }; +#ifdef __KERNEL__ +#include + +static inline struct icmphdr *icmp_hdr(const struct sk_buff *skb) +{ + return (struct icmphdr *)skb_transport_header(skb); +} +#endif + /* * constants for (set|get)sockopt */ diff -puN include/linux/icmpv6.h~git-net include/linux/icmpv6.h --- a/include/linux/icmpv6.h~git-net +++ a/include/linux/icmpv6.h @@ -75,6 +75,15 @@ struct icmp6hdr { #define icmp6_router_pref icmp6_dataun.u_nd_ra.router_pref }; +#ifdef __KERNEL__ +#include + +static inline struct icmp6hdr *icmp6_hdr(const struct sk_buff *skb) +{ + return (struct icmp6hdr *)skb_transport_header(skb); +} +#endif + #define ICMPV6_ROUTER_PREF_LOW 0x3 #define ICMPV6_ROUTER_PREF_MEDIUM 0x0 #define ICMPV6_ROUTER_PREF_HIGH 0x1 diff -puN include/linux/if_addr.h~git-net include/linux/if_addr.h --- a/include/linux/if_addr.h~git-net +++ a/include/linux/if_addr.h @@ -39,6 +39,7 @@ enum #define IFA_F_TEMPORARY IFA_F_SECONDARY #define IFA_F_NODAD 0x02 +#define IFA_F_OPTIMISTIC 0x04 #define IFA_F_HOMEADDRESS 0x10 #define IFA_F_DEPRECATED 0x20 #define IFA_F_TENTATIVE 0x40 diff -puN include/linux/if_arp.h~git-net include/linux/if_arp.h --- a/include/linux/if_arp.h~git-net +++ a/include/linux/if_arp.h @@ -148,4 +148,13 @@ struct arphdr }; +#ifdef __KERNEL__ +#include + +static inline struct arphdr *arp_hdr(const struct sk_buff *skb) +{ + return (struct arphdr *)skb_network_header(skb); +} +#endif + #endif /* _LINUX_IF_ARP_H */ diff -puN include/linux/if_bridge.h~git-net include/linux/if_bridge.h --- a/include/linux/if_bridge.h~git-net +++ a/include/linux/if_bridge.h @@ -105,7 +105,8 @@ struct __fdb_entry #include extern void brioctl_set(int (*ioctl_hook)(unsigned int, void __user *)); -extern int (*br_handle_frame_hook)(struct net_bridge_port *p, struct sk_buff **pskb); +extern struct sk_buff *(*br_handle_frame_hook)(struct net_bridge_port *p, + struct sk_buff *skb); extern int (*br_should_route_hook)(struct sk_buff **pskb); #endif diff -puN include/linux/if_ether.h~git-net include/linux/if_ether.h --- a/include/linux/if_ether.h~git-net +++ a/include/linux/if_ether.h @@ -112,7 +112,7 @@ struct ethhdr { static inline struct ethhdr *eth_hdr(const struct sk_buff *skb) { - return (struct ethhdr *)skb->mac.raw; + return (struct ethhdr *)skb_mac_header(skb); } #ifdef CONFIG_SYSCTL diff -puN include/linux/if_link.h~git-net include/linux/if_link.h --- a/include/linux/if_link.h~git-net +++ a/include/linux/if_link.h @@ -126,6 +126,7 @@ enum IFLA_INET6_STATS, /* statistics */ IFLA_INET6_MCAST, /* MC things. What of them? 
*/ IFLA_INET6_CACHEINFO, /* time values and max reasm size */ + IFLA_INET6_ICMP6STATS, /* statistics (icmpv6) */ __IFLA_INET6_MAX }; diff -puN include/linux/if_packet.h~git-net include/linux/if_packet.h --- a/include/linux/if_packet.h~git-net +++ a/include/linux/if_packet.h @@ -42,6 +42,7 @@ struct sockaddr_ll #define PACKET_STATISTICS 6 #define PACKET_COPY_THRESH 7 #define PACKET_AUXDATA 8 +#define PACKET_ORIGDEV 9 struct tpacket_stats { diff -puN include/linux/if_pppox.h~git-net include/linux/if_pppox.h --- a/include/linux/if_pppox.h~git-net +++ a/include/linux/if_pppox.h @@ -111,7 +111,17 @@ struct pppoe_hdr { struct pppoe_tag tag[0]; } __attribute__ ((packed)); +/* Length of entire PPPoE + PPP header */ +#define PPPOE_SES_HLEN 8 + #ifdef __KERNEL__ +#include + +static inline struct pppoe_hdr *pppoe_hdr(const struct sk_buff *skb) +{ + return (struct pppoe_hdr *)skb_network_header(skb); +} + struct pppoe_opt { struct net_device *dev; /* device associated with socket*/ int ifindex; /* ifindex of device associated with socket */ diff -puN include/linux/if_tr.h~git-net include/linux/if_tr.h --- a/include/linux/if_tr.h~git-net +++ a/include/linux/if_tr.h @@ -47,7 +47,7 @@ struct trh_hdr { static inline struct trh_hdr *tr_hdr(const struct sk_buff *skb) { - return (struct trh_hdr *)skb->mac.raw; + return (struct trh_hdr *)skb_mac_header(skb); } #ifdef CONFIG_SYSCTL extern struct ctl_table tr_table[]; diff -puN include/linux/if_vlan.h~git-net include/linux/if_vlan.h --- a/include/linux/if_vlan.h~git-net +++ a/include/linux/if_vlan.h @@ -51,7 +51,7 @@ struct vlan_ethhdr { static inline struct vlan_ethhdr *vlan_eth_hdr(const struct sk_buff *skb) { - return (struct vlan_ethhdr *)skb->mac.raw; + return (struct vlan_ethhdr *)skb_mac_header(skb); } struct vlan_hdr { @@ -275,8 +275,8 @@ static inline struct sk_buff *__vlan_put veth->h_vlan_TCI = htons(tag); skb->protocol = __constant_htons(ETH_P_8021Q); - skb->mac.raw -= VLAN_HLEN; - skb->nh.raw -= VLAN_HLEN; + skb->mac_header -= VLAN_HLEN; + skb->network_header -= VLAN_HLEN; return skb; } diff -puN include/linux/igmp.h~git-net include/linux/igmp.h --- a/include/linux/igmp.h~git-net +++ a/include/linux/igmp.h @@ -80,6 +80,27 @@ struct igmpv3_query { __be32 srcs[0]; }; +#ifdef __KERNEL__ +#include + +static inline struct igmphdr *igmp_hdr(const struct sk_buff *skb) +{ + return (struct igmphdr *)skb_transport_header(skb); +} + +static inline struct igmpv3_report * + igmpv3_report_hdr(const struct sk_buff *skb) +{ + return (struct igmpv3_report *)skb_transport_header(skb); +} + +static inline struct igmpv3_query * + igmpv3_query_hdr(const struct sk_buff *skb) +{ + return (struct igmpv3_query *)skb_transport_header(skb); +} +#endif + #define IGMP_HOST_MEMBERSHIP_QUERY 0x11 /* From RFC1112 */ #define IGMP_HOST_MEMBERSHIP_REPORT 0x12 /* Ditto */ #define IGMP_DVMRP 0x13 /* DVMRP routing */ diff -puN include/linux/in.h~git-net include/linux/in.h --- a/include/linux/in.h~git-net +++ a/include/linux/in.h @@ -83,6 +83,7 @@ struct in_addr { #define IP_PMTUDISC_DONT 0 /* Never send DF frames */ #define IP_PMTUDISC_WANT 1 /* Use per route hints */ #define IP_PMTUDISC_DO 2 /* Always DF */ +#define IP_PMTUDISC_PROBE 3 /* Ignore dst pmtu */ #define IP_MULTICAST_IF 32 #define IP_MULTICAST_TTL 33 diff -puN include/linux/in6.h~git-net include/linux/in6.h --- a/include/linux/in6.h~git-net +++ a/include/linux/in6.h @@ -179,6 +179,7 @@ struct in6_flowlabel_req #define IPV6_PMTUDISC_DONT 0 #define IPV6_PMTUDISC_WANT 1 #define IPV6_PMTUDISC_DO 2 +#define IPV6_PMTUDISC_PROBE 3 
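The IP_PMTUDISC_PROBE / IPV6_PMTUDISC_PROBE values added above select a path-MTU mode that keeps DF set while ignoring the destination's cached PMTU, intended for application-level MTU probing. As a rough userspace sketch (not part of this patch, and assuming the application defines the constant itself where libc headers do not yet carry it), the mode is requested through the existing IP_MTU_DISCOVER socket option:

#include <sys/socket.h>
#include <netinet/in.h>

#ifndef IP_PMTUDISC_PROBE
#define IP_PMTUDISC_PROBE 3	/* value introduced by this patch */
#endif

/* Keep DF set on outgoing packets but ignore the route's cached PMTU,
 * so a tool such as tracepath can probe the real path MTU itself. */
static int set_pmtu_probe(int fd)
{
	int val = IP_PMTUDISC_PROBE;

	return setsockopt(fd, IPPROTO_IP, IP_MTU_DISCOVER, &val, sizeof(val));
}

For an IPv6 socket the analogous call would use IPPROTO_IPV6 and IPV6_MTU_DISCOVER with IPV6_PMTUDISC_PROBE.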
/* Flowlabel */ #define IPV6_FLOWLABEL_MGR 32 diff -puN include/linux/ip.h~git-net include/linux/ip.h --- a/include/linux/ip.h~git-net +++ a/include/linux/ip.h @@ -104,6 +104,20 @@ struct iphdr { /*The options start here. */ }; +#ifdef __KERNEL__ +#include + +static inline struct iphdr *ip_hdr(const struct sk_buff *skb) +{ + return (struct iphdr *)skb_network_header(skb); +} + +static inline struct iphdr *ipip_hdr(const struct sk_buff *skb) +{ + return (struct iphdr *)skb_transport_header(skb); +} +#endif + struct ip_auth_hdr { __u8 nexthdr; __u8 hdrlen; /* This one is measured in 32 bit units! */ diff -puN include/linux/ipv6.h~git-net include/linux/ipv6.h --- a/include/linux/ipv6.h~git-net +++ a/include/linux/ipv6.h @@ -177,6 +177,9 @@ struct ipv6_devconf { #endif #endif __s32 proxy_ndp; +#ifdef CONFIG_IPV6_OPTIMISTIC_DAD + __s32 optimistic_dad; +#endif void *sysctl; }; @@ -205,6 +208,7 @@ enum { DEVCONF_RTR_PROBE_INTERVAL, DEVCONF_ACCEPT_RA_RT_INFO_MAX_PLEN, DEVCONF_PROXY_NDP, + DEVCONF_OPTIMISTIC_DAD, DEVCONF_MAX }; @@ -216,6 +220,16 @@ enum { #include /* struct ipv6_mc_socklist */ #include +static inline struct ipv6hdr *ipv6_hdr(const struct sk_buff *skb) +{ + return (struct ipv6hdr *)skb_network_header(skb); +} + +static inline struct ipv6hdr *ipipv6_hdr(const struct sk_buff *skb) +{ + return (struct ipv6hdr *)skb_transport_header(skb); +} + /* This structure contains results of exthdrs parsing as offsets from skb->nh. diff -puN include/linux/jhash.h~git-net include/linux/jhash.h --- a/include/linux/jhash.h~git-net +++ a/include/linux/jhash.h @@ -84,7 +84,7 @@ static inline u32 jhash(const void *key, /* A special optimized version that handles 1 or more of u32s. * The length parameter here is the number of u32s in the key. */ -static inline u32 jhash2(u32 *k, u32 length, u32 initval) +static inline u32 jhash2(const u32 *k, u32 length, u32 initval) { u32 a, b, c, len; diff -puN include/linux/netdevice.h~git-net include/linux/netdevice.h --- a/include/linux/netdevice.h~git-net +++ a/include/linux/netdevice.h @@ -42,6 +42,8 @@ struct vlan_group; struct ethtool_ops; struct netpoll_info; +/* 802.11 specific */ +struct wireless_dev; /* source back-compat hooks */ #define SET_ETHTOOL_OPS(netdev,ops) \ ( (netdev)->ethtool_ops = (ops) ) @@ -323,6 +325,7 @@ struct net_device #define NETIF_F_VLAN_CHALLENGED 1024 /* Device cannot handle VLAN packets */ #define NETIF_F_GSO 2048 /* Enable software GSO. */ #define NETIF_F_LLTX 4096 /* LockLess TX */ +#define NETIF_F_INTERNAL_STATS 8192 /* Use stats structure in net_device */ /* Segmentation offload features */ #define NETIF_F_GSO_SHIFT 16 @@ -347,6 +350,7 @@ struct net_device struct net_device_stats* (*get_stats)(struct net_device *dev); + struct net_device_stats stats; /* List of functions to handle Wireless Extensions (instead of ioctl). * See for details. 
Jean II */ @@ -398,6 +402,8 @@ struct net_device void *ip6_ptr; /* IPv6 specific data */ void *ec_ptr; /* Econet specific data */ void *ax25_ptr; /* AX.25 specific data */ + struct wireless_dev *ieee80211_ptr; /* IEEE 802.11 specific data, + assign before registering */ /* * Cache line mostly used on receive path (including eth_type_trans()) diff -puN include/linux/netfilter.h~git-net include/linux/netfilter.h --- a/include/linux/netfilter.h~git-net +++ a/include/linux/netfilter.h @@ -281,9 +281,6 @@ extern void nf_reinject(struct sk_buff * struct nf_info *info, unsigned int verdict); -extern void (*ip_ct_attach)(struct sk_buff *, struct sk_buff *); -extern void nf_ct_attach(struct sk_buff *, struct sk_buff *); - /* FIXME: Before cache is ever used, this must be implemented for real. */ extern void nf_invalidate_cache(int pf); @@ -388,11 +385,18 @@ static inline int nf_hook(int pf, unsign { return 1; } -static inline void nf_ct_attach(struct sk_buff *new, struct sk_buff *skb) {} struct flowi; static inline void nf_nat_decode_session(struct sk_buff *skb, struct flowi *fl, int family) {} #endif /*CONFIG_NETFILTER*/ +#if defined(CONFIG_NF_CONNTRACK) || defined(CONFIG_NF_CONNTRACK_MODULE) +extern void (*ip_ct_attach)(struct sk_buff *, struct sk_buff *); +extern void nf_ct_attach(struct sk_buff *, struct sk_buff *); +extern void (*nf_ct_destroy)(struct nf_conntrack *); +#else +static inline void nf_ct_attach(struct sk_buff *new, struct sk_buff *skb) {} +#endif + #endif /*__KERNEL__*/ #endif /*__LINUX_NETFILTER_H*/ diff -puN include/linux/netfilter/nf_conntrack_tcp.h~git-net include/linux/netfilter/nf_conntrack_tcp.h --- a/include/linux/netfilter/nf_conntrack_tcp.h~git-net +++ a/include/linux/netfilter/nf_conntrack_tcp.h @@ -30,6 +30,11 @@ enum tcp_conntrack { /* Be liberal in window checking */ #define IP_CT_TCP_FLAG_BE_LIBERAL 0x08 +struct nf_ct_tcp_flags { + u_int8_t flags; + u_int8_t mask; +}; + #ifdef __KERNEL__ struct ip_ct_tcp_state { diff -puN include/linux/netfilter/nfnetlink.h~git-net include/linux/netfilter/nfnetlink.h --- a/include/linux/netfilter/nfnetlink.h~git-net +++ a/include/linux/netfilter/nfnetlink.h @@ -62,11 +62,11 @@ struct nfattr #define NFA_DATA(nfa) ((void *)(((char *)(nfa)) + NFA_LENGTH(0))) #define NFA_PAYLOAD(nfa) ((int)((nfa)->nfa_len) - NFA_LENGTH(0)) #define NFA_NEST(skb, type) \ -({ struct nfattr *__start = (struct nfattr *) (skb)->tail; \ +({ struct nfattr *__start = (struct nfattr *)skb_tail_pointer(skb); \ NFA_PUT(skb, (NFNL_NFA_NEST | type), 0, NULL); \ __start; }) #define NFA_NEST_END(skb, start) \ -({ (start)->nfa_len = ((skb)->tail - (unsigned char *) (start)); \ +({ (start)->nfa_len = skb_tail_pointer(skb) - (unsigned char *)(start); \ (skb)->len; }) #define NFA_NEST_CANCEL(skb, start) \ ({ if (start) \ @@ -111,7 +111,7 @@ struct nfgenmsg { struct nfnl_callback { int (*call)(struct sock *nl, struct sk_buff *skb, - struct nlmsghdr *nlh, struct nfattr *cda[], int *errp); + struct nlmsghdr *nlh, struct nfattr *cda[]); u_int16_t attr_count; /* number of nfattr's */ }; @@ -129,19 +129,6 @@ extern void __nfa_fill(struct sk_buff *s ({ if (skb_tailroom(skb) < (int)NFA_SPACE(attrlen)) goto nfattr_failure; \ __nfa_fill(skb, attrtype, attrlen, data); }) -extern struct semaphore nfnl_sem; - -#define nfnl_shlock() down(&nfnl_sem) -#define nfnl_shlock_nowait() down_trylock(&nfnl_sem) - -#define nfnl_shunlock() do { up(&nfnl_sem); \ - if(nfnl && nfnl->sk_receive_queue.qlen) \ - nfnl->sk_data_ready(nfnl, 0); \ - } while(0) - -extern void nfnl_lock(void); -extern void 
nfnl_unlock(void); - extern int nfnetlink_subsys_register(struct nfnetlink_subsystem *n); extern int nfnetlink_subsys_unregister(struct nfnetlink_subsystem *n); diff -puN include/linux/netfilter/nfnetlink_conntrack.h~git-net include/linux/netfilter/nfnetlink_conntrack.h --- a/include/linux/netfilter/nfnetlink_conntrack.h~git-net +++ a/include/linux/netfilter/nfnetlink_conntrack.h @@ -83,6 +83,10 @@ enum ctattr_protoinfo { enum ctattr_protoinfo_tcp { CTA_PROTOINFO_TCP_UNSPEC, CTA_PROTOINFO_TCP_STATE, + CTA_PROTOINFO_TCP_WSCALE_ORIGINAL, + CTA_PROTOINFO_TCP_WSCALE_REPLY, + CTA_PROTOINFO_TCP_FLAGS_ORIGINAL, + CTA_PROTOINFO_TCP_FLAGS_REPLY, __CTA_PROTOINFO_TCP_MAX }; #define CTA_PROTOINFO_TCP_MAX (__CTA_PROTOINFO_TCP_MAX - 1) diff -puN include/linux/netfilter_bridge.h~git-net include/linux/netfilter_bridge.h --- a/include/linux/netfilter_bridge.h~git-net +++ a/include/linux/netfilter_bridge.h @@ -7,6 +7,7 @@ #include #include #include +#include /* Bridge Hooks */ /* After promisc drops, checksum checks. */ @@ -58,8 +59,14 @@ static inline int nf_bridge_maybe_copy_h * enough room for the encapsulating header (if there is one). */ static inline int nf_bridge_pad(const struct sk_buff *skb) { - return (skb->nf_bridge && skb->protocol == htons(ETH_P_8021Q)) - ? VLAN_HLEN : 0; + int padding = 0; + + if (skb->nf_bridge && skb->protocol == htons(ETH_P_8021Q)) + padding = VLAN_HLEN; + else if (skb->nf_bridge && skb->protocol == htons(ETH_P_PPP_SES)) + padding = PPPOE_SES_HLEN; + + return padding; } struct bridge_skb_cb { diff -puN include/linux/netfilter_bridge/ebt_802_3.h~git-net include/linux/netfilter_bridge/ebt_802_3.h --- a/include/linux/netfilter_bridge/ebt_802_3.h~git-net +++ a/include/linux/netfilter_bridge/ebt_802_3.h @@ -54,7 +54,7 @@ struct ebt_802_3_hdr { static inline struct ebt_802_3_hdr *ebt_802_3_hdr(const struct sk_buff *skb) { - return (struct ebt_802_3_hdr *)skb->mac.raw; + return (struct ebt_802_3_hdr *)skb_mac_header(skb); } #endif diff -puN include/linux/netfilter_bridge/ebt_arp.h~git-net include/linux/netfilter_bridge/ebt_arp.h --- a/include/linux/netfilter_bridge/ebt_arp.h~git-net +++ a/include/linux/netfilter_bridge/ebt_arp.h @@ -8,8 +8,10 @@ #define EBT_ARP_DST_IP 0x10 #define EBT_ARP_SRC_MAC 0x20 #define EBT_ARP_DST_MAC 0x40 +#define EBT_ARP_GRAT 0x80 #define EBT_ARP_MASK (EBT_ARP_OPCODE | EBT_ARP_HTYPE | EBT_ARP_PTYPE | \ - EBT_ARP_SRC_IP | EBT_ARP_DST_IP | EBT_ARP_SRC_MAC | EBT_ARP_DST_MAC) + EBT_ARP_SRC_IP | EBT_ARP_DST_IP | EBT_ARP_SRC_MAC | EBT_ARP_DST_MAC | \ + EBT_ARP_GRAT) #define EBT_ARP_MATCH "arp" struct ebt_arp_info diff -puN include/linux/netfilter_ipv4/Kbuild~git-net include/linux/netfilter_ipv4/Kbuild --- a/include/linux/netfilter_ipv4/Kbuild~git-net +++ a/include/linux/netfilter_ipv4/Kbuild @@ -1,9 +1,3 @@ -header-y += ip_conntrack_helper.h -header-y += ip_conntrack_protocol.h -header-y += ip_conntrack_sctp.h -header-y += ip_conntrack_tcp.h -header-y += ip_conntrack_tftp.h -header-y += ip_nat_pptp.h header-y += ipt_addrtype.h header-y += ipt_ah.h header-y += ipt_CLASSIFY.h @@ -49,13 +43,5 @@ header-y += ipt_ttl.h header-y += ipt_TTL.h header-y += ipt_ULOG.h -unifdef-y += ip_conntrack.h -unifdef-y += ip_conntrack_h323.h -unifdef-y += ip_conntrack_irc.h -unifdef-y += ip_conntrack_pptp.h -unifdef-y += ip_conntrack_proto_gre.h -unifdef-y += ip_conntrack_tuple.h -unifdef-y += ip_nat.h -unifdef-y += ip_nat_rule.h unifdef-y += ip_queue.h unifdef-y += ip_tables.h diff -puN include/linux/netfilter_ipv4/ip_conntrack.h~git-net /dev/null --- 
a/include/linux/netfilter_ipv4/ip_conntrack.h +++ /dev/null @@ -1,402 +0,0 @@ -#ifndef _IP_CONNTRACK_H -#define _IP_CONNTRACK_H - -#include - -#ifdef __KERNEL__ -#include -#include -#include -#include - -#include -#include -#include -#include -#include - -/* per conntrack: protocol private data */ -union ip_conntrack_proto { - /* insert conntrack proto private data here */ - struct ip_ct_gre gre; - struct ip_ct_sctp sctp; - struct ip_ct_tcp tcp; - struct ip_ct_icmp icmp; -}; - -union ip_conntrack_expect_proto { - /* insert expect proto private data here */ -}; - -/* Add protocol helper include file here */ -#include -#include -#include -#include -#include - -/* per conntrack: application helper private data */ -union ip_conntrack_help { - /* insert conntrack helper private data (master) here */ - struct ip_ct_h323_master ct_h323_info; - struct ip_ct_pptp_master ct_pptp_info; - struct ip_ct_ftp_master ct_ftp_info; - struct ip_ct_irc_master ct_irc_info; -}; - -#ifdef CONFIG_IP_NF_NAT_NEEDED -#include -#include - -/* per conntrack: nat application helper private data */ -union ip_conntrack_nat_help { - /* insert nat helper private data here */ - struct ip_nat_pptp nat_pptp_info; -}; -#endif - -#include -#include - -#ifdef CONFIG_NETFILTER_DEBUG -#define IP_NF_ASSERT(x) \ -do { \ - if (!(x)) \ - /* Wooah! I'm tripping my conntrack in a frenzy of \ - netplay... */ \ - printk("NF_IP_ASSERT: %s:%i(%s)\n", \ - __FILE__, __LINE__, __FUNCTION__); \ -} while(0) -#else -#define IP_NF_ASSERT(x) -#endif - -struct ip_conntrack_helper; - -struct ip_conntrack -{ - /* Usage count in here is 1 for hash table/destruct timer, 1 per skb, - plus 1 for any connection(s) we are `master' for */ - struct nf_conntrack ct_general; - - /* Have we seen traffic both ways yet? (bitset) */ - unsigned long status; - - /* Timer function; drops refcnt when it goes off. */ - struct timer_list timeout; - -#ifdef CONFIG_IP_NF_CT_ACCT - /* Accounting Information (same cache line as other written members) */ - struct ip_conntrack_counter counters[IP_CT_DIR_MAX]; -#endif - /* If we were expected by an expectation, this will be it */ - struct ip_conntrack *master; - - /* Current number of expected connections */ - unsigned int expecting; - - /* Unique ID that identifies this conntrack*/ - unsigned int id; - - /* Helper, if any. */ - struct ip_conntrack_helper *helper; - - /* Storage reserved for other modules: */ - union ip_conntrack_proto proto; - - union ip_conntrack_help help; - -#ifdef CONFIG_IP_NF_NAT_NEEDED - struct { - struct ip_nat_info info; - union ip_conntrack_nat_help help; -#if defined(CONFIG_IP_NF_TARGET_MASQUERADE) || \ - defined(CONFIG_IP_NF_TARGET_MASQUERADE_MODULE) - int masq_index; -#endif - } nat; -#endif /* CONFIG_IP_NF_NAT_NEEDED */ - -#if defined(CONFIG_IP_NF_CONNTRACK_MARK) - u_int32_t mark; -#endif - -#ifdef CONFIG_IP_NF_CONNTRACK_SECMARK - u_int32_t secmark; -#endif - - /* Traversed often, so hopefully in different cacheline to top */ - /* These are my tuples; original and reply */ - struct ip_conntrack_tuple_hash tuplehash[IP_CT_DIR_MAX]; -}; - -struct ip_conntrack_expect -{ - /* Internal linked list (global expectation list) */ - struct list_head list; - - /* We expect this tuple, with the following mask */ - struct ip_conntrack_tuple tuple, mask; - - /* Function to call after setup and insertion */ - void (*expectfn)(struct ip_conntrack *new, - struct ip_conntrack_expect *this); - - /* The conntrack of the master connection */ - struct ip_conntrack *master; - - /* Timer function; deletes the expectation. 
*/ - struct timer_list timeout; - - /* Usage count. */ - atomic_t use; - - /* Unique ID */ - unsigned int id; - - /* Flags */ - unsigned int flags; - -#ifdef CONFIG_IP_NF_NAT_NEEDED - __be32 saved_ip; - /* This is the original per-proto part, used to map the - * expected connection the way the recipient expects. */ - union ip_conntrack_manip_proto saved_proto; - /* Direction relative to the master connection. */ - enum ip_conntrack_dir dir; -#endif -}; - -#define IP_CT_EXPECT_PERMANENT 0x1 - -static inline struct ip_conntrack * -tuplehash_to_ctrack(const struct ip_conntrack_tuple_hash *hash) -{ - return container_of(hash, struct ip_conntrack, - tuplehash[hash->tuple.dst.dir]); -} - -/* get master conntrack via master expectation */ -#define master_ct(conntr) (conntr->master) - -/* Alter reply tuple (maybe alter helper). */ -extern void -ip_conntrack_alter_reply(struct ip_conntrack *conntrack, - const struct ip_conntrack_tuple *newreply); - -/* Is this tuple taken? (ignoring any belonging to the given - conntrack). */ -extern int -ip_conntrack_tuple_taken(const struct ip_conntrack_tuple *tuple, - const struct ip_conntrack *ignored_conntrack); - -/* Return conntrack_info and tuple hash for given skb. */ -static inline struct ip_conntrack * -ip_conntrack_get(const struct sk_buff *skb, enum ip_conntrack_info *ctinfo) -{ - *ctinfo = skb->nfctinfo; - return (struct ip_conntrack *)skb->nfct; -} - -/* decrement reference count on a conntrack */ -static inline void -ip_conntrack_put(struct ip_conntrack *ct) -{ - IP_NF_ASSERT(ct); - nf_conntrack_put(&ct->ct_general); -} - -extern int invert_tuplepr(struct ip_conntrack_tuple *inverse, - const struct ip_conntrack_tuple *orig); - -extern void __ip_ct_refresh_acct(struct ip_conntrack *ct, - enum ip_conntrack_info ctinfo, - const struct sk_buff *skb, - unsigned long extra_jiffies, - int do_acct); - -/* Refresh conntrack for this many jiffies and do accounting */ -static inline void ip_ct_refresh_acct(struct ip_conntrack *ct, - enum ip_conntrack_info ctinfo, - const struct sk_buff *skb, - unsigned long extra_jiffies) -{ - __ip_ct_refresh_acct(ct, ctinfo, skb, extra_jiffies, 1); -} - -/* Refresh conntrack for this many jiffies */ -static inline void ip_ct_refresh(struct ip_conntrack *ct, - const struct sk_buff *skb, - unsigned long extra_jiffies) -{ - __ip_ct_refresh_acct(ct, 0, skb, extra_jiffies, 0); -} - -/* These are for NAT. Icky. */ -/* Update TCP window tracking data when NAT mangles the packet */ -extern void ip_conntrack_tcp_update(struct sk_buff *skb, - struct ip_conntrack *conntrack, - enum ip_conntrack_dir dir); - -/* Call me when a conntrack is destroyed. */ -extern void (*ip_conntrack_destroyed)(struct ip_conntrack *conntrack); - -/* Fake conntrack entry for untracked connections */ -extern struct ip_conntrack ip_conntrack_untracked; - -/* Returns new sk_buff, or NULL */ -struct sk_buff * -ip_ct_gather_frags(struct sk_buff *skb, u_int32_t user); - -/* Iterate over all conntracks: if iter returns true, it's deleted. 
*/ -extern void -ip_ct_iterate_cleanup(int (*iter)(struct ip_conntrack *i, void *data), - void *data); - -extern struct ip_conntrack_helper * -__ip_conntrack_helper_find_byname(const char *); -extern struct ip_conntrack_helper * -ip_conntrack_helper_find_get(const struct ip_conntrack_tuple *tuple); -extern void ip_conntrack_helper_put(struct ip_conntrack_helper *helper); - -extern struct ip_conntrack_protocol * -__ip_conntrack_proto_find(u_int8_t protocol); -extern struct ip_conntrack_protocol * -ip_conntrack_proto_find_get(u_int8_t protocol); -extern void ip_conntrack_proto_put(struct ip_conntrack_protocol *proto); - -extern void ip_ct_remove_expectations(struct ip_conntrack *ct); - -extern struct ip_conntrack *ip_conntrack_alloc(struct ip_conntrack_tuple *, - struct ip_conntrack_tuple *); - -extern void ip_conntrack_free(struct ip_conntrack *ct); - -extern void ip_conntrack_hash_insert(struct ip_conntrack *ct); - -extern struct ip_conntrack_expect * -__ip_conntrack_expect_find(const struct ip_conntrack_tuple *tuple); - -extern struct ip_conntrack_expect * -ip_conntrack_expect_find_get(const struct ip_conntrack_tuple *tuple); - -extern struct ip_conntrack_tuple_hash * -__ip_conntrack_find(const struct ip_conntrack_tuple *tuple, - const struct ip_conntrack *ignored_conntrack); - -extern void ip_conntrack_flush(void); - -/* It's confirmed if it is, or has been in the hash table. */ -static inline int is_confirmed(struct ip_conntrack *ct) -{ - return test_bit(IPS_CONFIRMED_BIT, &ct->status); -} - -static inline int is_dying(struct ip_conntrack *ct) -{ - return test_bit(IPS_DYING_BIT, &ct->status); -} - -extern unsigned int ip_conntrack_htable_size; -extern int ip_conntrack_checksum; - -#define CONNTRACK_STAT_INC(count) (__get_cpu_var(ip_conntrack_stat).count++) -#define CONNTRACK_STAT_INC_ATOMIC(count) \ -do { \ - local_bh_disable(); \ - __get_cpu_var(ip_conntrack_stat).count++; \ - local_bh_enable(); \ -} while (0) - -#ifdef CONFIG_IP_NF_CONNTRACK_EVENTS -#include -#include - -struct ip_conntrack_ecache { - struct ip_conntrack *ct; - unsigned int events; -}; -DECLARE_PER_CPU(struct ip_conntrack_ecache, ip_conntrack_ecache); - -#define CONNTRACK_ECACHE(x) (__get_cpu_var(ip_conntrack_ecache).x) - -extern struct atomic_notifier_head ip_conntrack_chain; -extern struct atomic_notifier_head ip_conntrack_expect_chain; - -static inline int ip_conntrack_register_notifier(struct notifier_block *nb) -{ - return atomic_notifier_chain_register(&ip_conntrack_chain, nb); -} - -static inline int ip_conntrack_unregister_notifier(struct notifier_block *nb) -{ - return atomic_notifier_chain_unregister(&ip_conntrack_chain, nb); -} - -static inline int -ip_conntrack_expect_register_notifier(struct notifier_block *nb) -{ - return atomic_notifier_chain_register(&ip_conntrack_expect_chain, nb); -} - -static inline int -ip_conntrack_expect_unregister_notifier(struct notifier_block *nb) -{ - return atomic_notifier_chain_unregister(&ip_conntrack_expect_chain, - nb); -} - -extern void ip_ct_deliver_cached_events(const struct ip_conntrack *ct); -extern void __ip_ct_event_cache_init(struct ip_conntrack *ct); - -static inline void -ip_conntrack_event_cache(enum ip_conntrack_events event, - const struct sk_buff *skb) -{ - struct ip_conntrack *ct = (struct ip_conntrack *)skb->nfct; - struct ip_conntrack_ecache *ecache; - - local_bh_disable(); - ecache = &__get_cpu_var(ip_conntrack_ecache); - if (ct != ecache->ct) - __ip_ct_event_cache_init(ct); - ecache->events |= event; - local_bh_enable(); -} - -static inline void 
ip_conntrack_event(enum ip_conntrack_events event, - struct ip_conntrack *ct) -{ - if (is_confirmed(ct) && !is_dying(ct)) - atomic_notifier_call_chain(&ip_conntrack_chain, event, ct); -} - -static inline void -ip_conntrack_expect_event(enum ip_conntrack_expect_events event, - struct ip_conntrack_expect *exp) -{ - atomic_notifier_call_chain(&ip_conntrack_expect_chain, event, exp); -} -#else /* CONFIG_IP_NF_CONNTRACK_EVENTS */ -static inline void ip_conntrack_event_cache(enum ip_conntrack_events event, - const struct sk_buff *skb) {} -static inline void ip_conntrack_event(enum ip_conntrack_events event, - struct ip_conntrack *ct) {} -static inline void ip_ct_deliver_cached_events(const struct ip_conntrack *ct) {} -static inline void -ip_conntrack_expect_event(enum ip_conntrack_expect_events event, - struct ip_conntrack_expect *exp) {} -#endif /* CONFIG_IP_NF_CONNTRACK_EVENTS */ - -#ifdef CONFIG_IP_NF_NAT_NEEDED -static inline int ip_nat_initialized(struct ip_conntrack *conntrack, - enum ip_nat_manip_type manip) -{ - if (manip == IP_NAT_MANIP_SRC) - return test_bit(IPS_SRC_NAT_DONE_BIT, &conntrack->status); - return test_bit(IPS_DST_NAT_DONE_BIT, &conntrack->status); -} -#endif /* CONFIG_IP_NF_NAT_NEEDED */ - -#endif /* __KERNEL__ */ -#endif /* _IP_CONNTRACK_H */ diff -puN include/linux/netfilter_ipv4/ip_conntrack_amanda.h~git-net /dev/null --- a/include/linux/netfilter_ipv4/ip_conntrack_amanda.h +++ /dev/null @@ -1,11 +0,0 @@ -#ifndef _IP_CONNTRACK_AMANDA_H -#define _IP_CONNTRACK_AMANDA_H -/* AMANDA tracking. */ - -struct ip_conntrack_expect; -extern unsigned int (*ip_nat_amanda_hook)(struct sk_buff **pskb, - enum ip_conntrack_info ctinfo, - unsigned int matchoff, - unsigned int matchlen, - struct ip_conntrack_expect *exp); -#endif /* _IP_CONNTRACK_AMANDA_H */ diff -puN include/linux/netfilter_ipv4/ip_conntrack_core.h~git-net /dev/null --- a/include/linux/netfilter_ipv4/ip_conntrack_core.h +++ /dev/null @@ -1,61 +0,0 @@ -#ifndef _IP_CONNTRACK_CORE_H -#define _IP_CONNTRACK_CORE_H -#include - -#define MAX_IP_CT_PROTO 256 -extern struct ip_conntrack_protocol *ip_ct_protos[MAX_IP_CT_PROTO]; - -/* This header is used to share core functionality between the - standalone connection tracking module, and the compatibility layer's use - of connection tracking. */ -extern unsigned int ip_conntrack_in(unsigned int hooknum, - struct sk_buff **pskb, - const struct net_device *in, - const struct net_device *out, - int (*okfn)(struct sk_buff *)); - -extern int ip_conntrack_init(void); -extern void ip_conntrack_cleanup(void); - -struct ip_conntrack_protocol; - -extern int -ip_ct_get_tuple(const struct iphdr *iph, - const struct sk_buff *skb, - unsigned int dataoff, - struct ip_conntrack_tuple *tuple, - const struct ip_conntrack_protocol *protocol); - -extern int -ip_ct_invert_tuple(struct ip_conntrack_tuple *inverse, - const struct ip_conntrack_tuple *orig, - const struct ip_conntrack_protocol *protocol); - -/* Find a connection corresponding to a tuple. */ -struct ip_conntrack_tuple_hash * -ip_conntrack_find_get(const struct ip_conntrack_tuple *tuple, - const struct ip_conntrack *ignored_conntrack); - -extern int __ip_conntrack_confirm(struct sk_buff **pskb); - -/* Confirm a connection: returns NF_DROP if packet must be dropped. 
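/*
 * Illustrative sketch, not part of the patch: how a module consumed the
 * conntrack event notifier chain declared just above in the old ipv4-only
 * conntrack code that this series removes.  "my_ct_event"/"my_ct_nb" are
 * made-up names; IPCT_NEW is assumed from the ip_conntrack event enum,
 * which is not shown in this hunk.
 */
static int my_ct_event(struct notifier_block *nb, unsigned long events, void *ptr)
{
	struct ip_conntrack *ct = ptr;

	if (events & IPCT_NEW)
		printk(KERN_DEBUG "new conntrack %p\n", ct);
	return NOTIFY_DONE;
}

static struct notifier_block my_ct_nb = {
	.notifier_call	= my_ct_event,
};

/* module init/exit would then call:
 *	ip_conntrack_register_notifier(&my_ct_nb);
 *	ip_conntrack_unregister_notifier(&my_ct_nb);
 */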
*/ -static inline int ip_conntrack_confirm(struct sk_buff **pskb) -{ - struct ip_conntrack *ct = (struct ip_conntrack *)(*pskb)->nfct; - int ret = NF_ACCEPT; - - if (ct) { - if (!is_confirmed(ct) && !is_dying(ct)) - ret = __ip_conntrack_confirm(pskb); - ip_ct_deliver_cached_events(ct); - } - return ret; -} - -extern void ip_ct_unlink_expect(struct ip_conntrack_expect *exp); - -extern struct list_head *ip_conntrack_hash; -extern struct list_head ip_conntrack_expect_list; -extern rwlock_t ip_conntrack_lock; -#endif /* _IP_CONNTRACK_CORE_H */ - diff -puN include/linux/netfilter_ipv4/ip_conntrack_ftp.h~git-net /dev/null --- a/include/linux/netfilter_ipv4/ip_conntrack_ftp.h +++ /dev/null @@ -1,44 +0,0 @@ -#ifndef _IP_CONNTRACK_FTP_H -#define _IP_CONNTRACK_FTP_H -/* FTP tracking. */ - -/* This enum is exposed to userspace */ -enum ip_ct_ftp_type -{ - /* PORT command from client */ - IP_CT_FTP_PORT, - /* PASV response from server */ - IP_CT_FTP_PASV, - /* EPRT command from client */ - IP_CT_FTP_EPRT, - /* EPSV response from server */ - IP_CT_FTP_EPSV, -}; - -#ifdef __KERNEL__ - -#define FTP_PORT 21 - -#define NUM_SEQ_TO_REMEMBER 2 -/* This structure exists only once per master */ -struct ip_ct_ftp_master { - /* Valid seq positions for cmd matching after newline */ - u_int32_t seq_aft_nl[IP_CT_DIR_MAX][NUM_SEQ_TO_REMEMBER]; - /* 0 means seq_match_aft_nl not set */ - int seq_aft_nl_num[IP_CT_DIR_MAX]; -}; - -struct ip_conntrack_expect; - -/* For NAT to hook in when we find a packet which describes what other - * connection we should expect. */ -extern unsigned int (*ip_nat_ftp_hook)(struct sk_buff **pskb, - enum ip_conntrack_info ctinfo, - enum ip_ct_ftp_type type, - unsigned int matchoff, - unsigned int matchlen, - struct ip_conntrack_expect *exp, - u32 *seq); -#endif /* __KERNEL__ */ - -#endif /* _IP_CONNTRACK_FTP_H */ diff -puN include/linux/netfilter_ipv4/ip_conntrack_h323.h~git-net /dev/null --- a/include/linux/netfilter_ipv4/ip_conntrack_h323.h +++ /dev/null @@ -1,89 +0,0 @@ -#ifndef _IP_CONNTRACK_H323_H -#define _IP_CONNTRACK_H323_H - -#ifdef __KERNEL__ - -#include - -#define RAS_PORT 1719 -#define Q931_PORT 1720 -#define H323_RTP_CHANNEL_MAX 4 /* Audio, video, FAX and other */ - -/* This structure exists only once per master */ -struct ip_ct_h323_master { - - /* Original and NATed Q.931 or H.245 signal ports */ - u_int16_t sig_port[IP_CT_DIR_MAX]; - - /* Original and NATed RTP ports */ - u_int16_t rtp_port[H323_RTP_CHANNEL_MAX][IP_CT_DIR_MAX]; - - union { - /* RAS connection timeout */ - u_int32_t timeout; - - /* Next TPKT length (for separate TPKT header and data) */ - u_int16_t tpkt_len[IP_CT_DIR_MAX]; - }; -}; - -struct ip_conntrack_expect; - -extern int get_h225_addr(unsigned char *data, TransportAddress * addr, - __be32 * ip, u_int16_t * port); -extern void ip_conntrack_h245_expect(struct ip_conntrack *new, - struct ip_conntrack_expect *this); -extern void ip_conntrack_q931_expect(struct ip_conntrack *new, - struct ip_conntrack_expect *this); -extern int (*set_h245_addr_hook) (struct sk_buff ** pskb, - unsigned char **data, int dataoff, - H245_TransportAddress * addr, - __be32 ip, u_int16_t port); -extern int (*set_h225_addr_hook) (struct sk_buff ** pskb, - unsigned char **data, int dataoff, - TransportAddress * addr, - __be32 ip, u_int16_t port); -extern int (*set_sig_addr_hook) (struct sk_buff ** pskb, - struct ip_conntrack * ct, - enum ip_conntrack_info ctinfo, - unsigned char **data, - TransportAddress * addr, int count); -extern int (*set_ras_addr_hook) (struct sk_buff ** pskb, - 
struct ip_conntrack * ct, - enum ip_conntrack_info ctinfo, - unsigned char **data, - TransportAddress * addr, int count); -extern int (*nat_rtp_rtcp_hook) (struct sk_buff ** pskb, - struct ip_conntrack * ct, - enum ip_conntrack_info ctinfo, - unsigned char **data, int dataoff, - H245_TransportAddress * addr, - u_int16_t port, u_int16_t rtp_port, - struct ip_conntrack_expect * rtp_exp, - struct ip_conntrack_expect * rtcp_exp); -extern int (*nat_t120_hook) (struct sk_buff ** pskb, struct ip_conntrack * ct, - enum ip_conntrack_info ctinfo, - unsigned char **data, int dataoff, - H245_TransportAddress * addr, u_int16_t port, - struct ip_conntrack_expect * exp); -extern int (*nat_h245_hook) (struct sk_buff ** pskb, struct ip_conntrack * ct, - enum ip_conntrack_info ctinfo, - unsigned char **data, int dataoff, - TransportAddress * addr, u_int16_t port, - struct ip_conntrack_expect * exp); -extern int (*nat_callforwarding_hook) (struct sk_buff ** pskb, - struct ip_conntrack * ct, - enum ip_conntrack_info ctinfo, - unsigned char **data, int dataoff, - TransportAddress * addr, - u_int16_t port, - struct ip_conntrack_expect * exp); -extern int (*nat_q931_hook) (struct sk_buff ** pskb, struct ip_conntrack * ct, - enum ip_conntrack_info ctinfo, - unsigned char **data, TransportAddress * addr, - int idx, u_int16_t port, - struct ip_conntrack_expect * exp); - -#endif - -#endif diff -puN include/linux/netfilter_ipv4/ip_conntrack_helper.h~git-net /dev/null --- a/include/linux/netfilter_ipv4/ip_conntrack_helper.h +++ /dev/null @@ -1,46 +0,0 @@ -/* IP connection tracking helpers. */ -#ifndef _IP_CONNTRACK_HELPER_H -#define _IP_CONNTRACK_HELPER_H -#include - -struct module; - -struct ip_conntrack_helper -{ - struct list_head list; /* Internal use. */ - - const char *name; /* name of the module */ - struct module *me; /* pointer to self */ - unsigned int max_expected; /* Maximum number of concurrent - * expected connections */ - unsigned int timeout; /* timeout for expecteds */ - - /* Mask of things we will help (compared against server response) */ - struct ip_conntrack_tuple tuple; - struct ip_conntrack_tuple mask; - - /* Function to call when data passes; return verdict, or -1 to - invalidate. */ - int (*help)(struct sk_buff **pskb, - struct ip_conntrack *ct, - enum ip_conntrack_info conntrackinfo); - - void (*destroy)(struct ip_conntrack *ct); - - int (*to_nfattr)(struct sk_buff *skb, const struct ip_conntrack *ct); -}; - -extern int ip_conntrack_helper_register(struct ip_conntrack_helper *); -extern void ip_conntrack_helper_unregister(struct ip_conntrack_helper *); - -/* Allocate space for an expectation: this is mandatory before calling - ip_conntrack_expect_related. You will have to call put afterwards. 
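/*
 * Illustrative sketch, not part of the patch: the allocate/relate/put
 * sequence described in the comment above, as the help() callback of a
 * hypothetical UDP helper might have used it (loosely modelled on the
 * tftp helper; "my_help" and the all-ones mask are assumptions).
 */
static int my_help(struct sk_buff **pskb,
		   struct ip_conntrack *ct,
		   enum ip_conntrack_info ctinfo)
{
	struct ip_conntrack_expect *exp;
	int ret = NF_ACCEPT;

	exp = ip_conntrack_expect_alloc(ct);
	if (exp == NULL)
		return NF_DROP;

	/* expect the reply direction of the master connection */
	exp->tuple = ct->tuplehash[IP_CT_DIR_REPLY].tuple;
	exp->mask.src.ip = htonl(0xffffffff);
	exp->mask.src.u.udp.port = 0;
	exp->mask.dst.ip = htonl(0xffffffff);
	exp->mask.dst.u.udp.port = htons(0xffff);
	exp->mask.dst.protonum = 0xff;
	exp->expectfn = NULL;
	exp->flags = 0;

	if (ip_conntrack_expect_related(exp) != 0)
		ret = NF_DROP;
	ip_conntrack_expect_put(exp);
	return ret;
}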
*/ -extern struct ip_conntrack_expect * -ip_conntrack_expect_alloc(struct ip_conntrack *master); -extern void ip_conntrack_expect_put(struct ip_conntrack_expect *exp); - -/* Add an expected connection: can have more than one per connection */ -extern int ip_conntrack_expect_related(struct ip_conntrack_expect *exp); -extern void ip_conntrack_unexpect_related(struct ip_conntrack_expect *exp); - -#endif /*_IP_CONNTRACK_HELPER_H*/ diff -puN include/linux/netfilter_ipv4/ip_conntrack_icmp.h~git-net /dev/null --- a/include/linux/netfilter_ipv4/ip_conntrack_icmp.h +++ /dev/null @@ -1,6 +0,0 @@ -#ifndef _IP_CONNTRACK_ICMP_H -#define _IP_CONNTRACK_ICMP_H - -#include - -#endif /* _IP_CONNTRACK_ICMP_H */ diff -puN include/linux/netfilter_ipv4/ip_conntrack_irc.h~git-net /dev/null --- a/include/linux/netfilter_ipv4/ip_conntrack_irc.h +++ /dev/null @@ -1,32 +0,0 @@ -/* IRC extension for IP connection tracking. - * (C) 2000 by Harald Welte - * based on RR's ip_conntrack_ftp.h - * - * ip_conntrack_irc.h,v 1.6 2000/11/07 18:26:42 laforge Exp - * - * This program is free software; you can redistribute it and/or - * modify it under the terms of the GNU General Public License - * as published by the Free Software Foundation; either version - * 2 of the License, or (at your option) any later version. - * - * - */ -#ifndef _IP_CONNTRACK_IRC_H -#define _IP_CONNTRACK_IRC_H - -/* This structure exists only once per master */ -struct ip_ct_irc_master { -}; - -#ifdef __KERNEL__ -extern unsigned int (*ip_nat_irc_hook)(struct sk_buff **pskb, - enum ip_conntrack_info ctinfo, - unsigned int matchoff, - unsigned int matchlen, - struct ip_conntrack_expect *exp); - -#define IRC_PORT 6667 - -#endif /* __KERNEL__ */ - -#endif /* _IP_CONNTRACK_IRC_H */ diff -puN include/linux/netfilter_ipv4/ip_conntrack_pptp.h~git-net /dev/null --- a/include/linux/netfilter_ipv4/ip_conntrack_pptp.h +++ /dev/null @@ -1,326 +0,0 @@ -/* PPTP constants and structs */ -#ifndef _CONNTRACK_PPTP_H -#define _CONNTRACK_PPTP_H - -/* state of the control session */ -enum pptp_ctrlsess_state { - PPTP_SESSION_NONE, /* no session present */ - PPTP_SESSION_ERROR, /* some session error */ - PPTP_SESSION_STOPREQ, /* stop_sess request seen */ - PPTP_SESSION_REQUESTED, /* start_sess request seen */ - PPTP_SESSION_CONFIRMED, /* session established */ -}; - -/* state of the call inside the control session */ -enum pptp_ctrlcall_state { - PPTP_CALL_NONE, - PPTP_CALL_ERROR, - PPTP_CALL_OUT_REQ, - PPTP_CALL_OUT_CONF, - PPTP_CALL_IN_REQ, - PPTP_CALL_IN_REP, - PPTP_CALL_IN_CONF, - PPTP_CALL_CLEAR_REQ, -}; - - -/* conntrack private data */ -struct ip_ct_pptp_master { - enum pptp_ctrlsess_state sstate; /* session state */ - - /* everything below is going to be per-expectation in newnat, - * since there could be more than one call within one session */ - enum pptp_ctrlcall_state cstate; /* call state */ - __be16 pac_call_id; /* call id of PAC, host byte order */ - __be16 pns_call_id; /* call id of PNS, host byte order */ - - /* in pre-2.6.11 this used to be per-expect. 
Now it is per-conntrack - * and therefore imposes a fixed limit on the number of maps */ - struct ip_ct_gre_keymap *keymap_orig, *keymap_reply; -}; - -/* conntrack_expect private member */ -struct ip_ct_pptp_expect { - enum pptp_ctrlcall_state cstate; /* call state */ - __be16 pac_call_id; /* call id of PAC */ - __be16 pns_call_id; /* call id of PNS */ -}; - - -#ifdef __KERNEL__ - -#define IP_CONNTR_PPTP PPTP_CONTROL_PORT - -#define PPTP_CONTROL_PORT 1723 - -#define PPTP_PACKET_CONTROL 1 -#define PPTP_PACKET_MGMT 2 - -#define PPTP_MAGIC_COOKIE 0x1a2b3c4d - -struct pptp_pkt_hdr { - __u16 packetLength; - __be16 packetType; - __be32 magicCookie; -}; - -/* PptpControlMessageType values */ -#define PPTP_START_SESSION_REQUEST 1 -#define PPTP_START_SESSION_REPLY 2 -#define PPTP_STOP_SESSION_REQUEST 3 -#define PPTP_STOP_SESSION_REPLY 4 -#define PPTP_ECHO_REQUEST 5 -#define PPTP_ECHO_REPLY 6 -#define PPTP_OUT_CALL_REQUEST 7 -#define PPTP_OUT_CALL_REPLY 8 -#define PPTP_IN_CALL_REQUEST 9 -#define PPTP_IN_CALL_REPLY 10 -#define PPTP_IN_CALL_CONNECT 11 -#define PPTP_CALL_CLEAR_REQUEST 12 -#define PPTP_CALL_DISCONNECT_NOTIFY 13 -#define PPTP_WAN_ERROR_NOTIFY 14 -#define PPTP_SET_LINK_INFO 15 - -#define PPTP_MSG_MAX 15 - -/* PptpGeneralError values */ -#define PPTP_ERROR_CODE_NONE 0 -#define PPTP_NOT_CONNECTED 1 -#define PPTP_BAD_FORMAT 2 -#define PPTP_BAD_VALUE 3 -#define PPTP_NO_RESOURCE 4 -#define PPTP_BAD_CALLID 5 -#define PPTP_REMOVE_DEVICE_ERROR 6 - -struct PptpControlHeader { - __be16 messageType; - __u16 reserved; -}; - -/* FramingCapability Bitmap Values */ -#define PPTP_FRAME_CAP_ASYNC 0x1 -#define PPTP_FRAME_CAP_SYNC 0x2 - -/* BearerCapability Bitmap Values */ -#define PPTP_BEARER_CAP_ANALOG 0x1 -#define PPTP_BEARER_CAP_DIGITAL 0x2 - -struct PptpStartSessionRequest { - __be16 protocolVersion; - __u16 reserved1; - __be32 framingCapability; - __be32 bearerCapability; - __be16 maxChannels; - __be16 firmwareRevision; - __u8 hostName[64]; - __u8 vendorString[64]; -}; - -/* PptpStartSessionResultCode Values */ -#define PPTP_START_OK 1 -#define PPTP_START_GENERAL_ERROR 2 -#define PPTP_START_ALREADY_CONNECTED 3 -#define PPTP_START_NOT_AUTHORIZED 4 -#define PPTP_START_UNKNOWN_PROTOCOL 5 - -struct PptpStartSessionReply { - __be16 protocolVersion; - __u8 resultCode; - __u8 generalErrorCode; - __be32 framingCapability; - __be32 bearerCapability; - __be16 maxChannels; - __be16 firmwareRevision; - __u8 hostName[64]; - __u8 vendorString[64]; -}; - -/* PptpStopReasons */ -#define PPTP_STOP_NONE 1 -#define PPTP_STOP_PROTOCOL 2 -#define PPTP_STOP_LOCAL_SHUTDOWN 3 - -struct PptpStopSessionRequest { - __u8 reason; - __u8 reserved1; - __u16 reserved2; -}; - -/* PptpStopSessionResultCode */ -#define PPTP_STOP_OK 1 -#define PPTP_STOP_GENERAL_ERROR 2 - -struct PptpStopSessionReply { - __u8 resultCode; - __u8 generalErrorCode; - __u16 reserved1; -}; - -struct PptpEchoRequest { - __be32 identNumber; -}; - -/* PptpEchoReplyResultCode */ -#define PPTP_ECHO_OK 1 -#define PPTP_ECHO_GENERAL_ERROR 2 - -struct PptpEchoReply { - __be32 identNumber; - __u8 resultCode; - __u8 generalErrorCode; - __u16 reserved; -}; - -/* PptpFramingType */ -#define PPTP_ASYNC_FRAMING 1 -#define PPTP_SYNC_FRAMING 2 -#define PPTP_DONT_CARE_FRAMING 3 - -/* PptpCallBearerType */ -#define PPTP_ANALOG_TYPE 1 -#define PPTP_DIGITAL_TYPE 2 -#define PPTP_DONT_CARE_BEARER_TYPE 3 - -struct PptpOutCallRequest { - __be16 callID; - __be16 callSerialNumber; - __be32 minBPS; - __be32 maxBPS; - __be32 bearerType; - __be32 framingType; - __be16 packetWindow; - 
__be16 packetProcDelay; - __be16 phoneNumberLength; - __u16 reserved1; - __u8 phoneNumber[64]; - __u8 subAddress[64]; -}; - -/* PptpCallResultCode */ -#define PPTP_OUTCALL_CONNECT 1 -#define PPTP_OUTCALL_GENERAL_ERROR 2 -#define PPTP_OUTCALL_NO_CARRIER 3 -#define PPTP_OUTCALL_BUSY 4 -#define PPTP_OUTCALL_NO_DIAL_TONE 5 -#define PPTP_OUTCALL_TIMEOUT 6 -#define PPTP_OUTCALL_DONT_ACCEPT 7 - -struct PptpOutCallReply { - __be16 callID; - __be16 peersCallID; - __u8 resultCode; - __u8 generalErrorCode; - __be16 causeCode; - __be32 connectSpeed; - __be16 packetWindow; - __be16 packetProcDelay; - __be32 physChannelID; -}; - -struct PptpInCallRequest { - __be16 callID; - __be16 callSerialNumber; - __be32 callBearerType; - __be32 physChannelID; - __be16 dialedNumberLength; - __be16 dialingNumberLength; - __u8 dialedNumber[64]; - __u8 dialingNumber[64]; - __u8 subAddress[64]; -}; - -/* PptpInCallResultCode */ -#define PPTP_INCALL_ACCEPT 1 -#define PPTP_INCALL_GENERAL_ERROR 2 -#define PPTP_INCALL_DONT_ACCEPT 3 - -struct PptpInCallReply { - __be16 callID; - __be16 peersCallID; - __u8 resultCode; - __u8 generalErrorCode; - __be16 packetWindow; - __be16 packetProcDelay; - __u16 reserved; -}; - -struct PptpInCallConnected { - __be16 peersCallID; - __u16 reserved; - __be32 connectSpeed; - __be16 packetWindow; - __be16 packetProcDelay; - __be32 callFramingType; -}; - -struct PptpClearCallRequest { - __be16 callID; - __u16 reserved; -}; - -struct PptpCallDisconnectNotify { - __be16 callID; - __u8 resultCode; - __u8 generalErrorCode; - __be16 causeCode; - __u16 reserved; - __u8 callStatistics[128]; -}; - -struct PptpWanErrorNotify { - __be16 peersCallID; - __u16 reserved; - __be32 crcErrors; - __be32 framingErrors; - __be32 hardwareOverRuns; - __be32 bufferOverRuns; - __be32 timeoutErrors; - __be32 alignmentErrors; -}; - -struct PptpSetLinkInfo { - __be16 peersCallID; - __u16 reserved; - __be32 sendAccm; - __be32 recvAccm; -}; - -union pptp_ctrl_union { - struct PptpStartSessionRequest sreq; - struct PptpStartSessionReply srep; - struct PptpStopSessionRequest streq; - struct PptpStopSessionReply strep; - struct PptpOutCallRequest ocreq; - struct PptpOutCallReply ocack; - struct PptpInCallRequest icreq; - struct PptpInCallReply icack; - struct PptpInCallConnected iccon; - struct PptpClearCallRequest clrreq; - struct PptpCallDisconnectNotify disc; - struct PptpWanErrorNotify wanerr; - struct PptpSetLinkInfo setlink; -}; - -extern int -(*ip_nat_pptp_hook_outbound)(struct sk_buff **pskb, - struct ip_conntrack *ct, - enum ip_conntrack_info ctinfo, - struct PptpControlHeader *ctlh, - union pptp_ctrl_union *pptpReq); - -extern int -(*ip_nat_pptp_hook_inbound)(struct sk_buff **pskb, - struct ip_conntrack *ct, - enum ip_conntrack_info ctinfo, - struct PptpControlHeader *ctlh, - union pptp_ctrl_union *pptpReq); - -extern void -(*ip_nat_pptp_hook_exp_gre)(struct ip_conntrack_expect *exp_orig, - struct ip_conntrack_expect *exp_reply); - -extern void -(*ip_nat_pptp_hook_expectfn)(struct ip_conntrack *ct, - struct ip_conntrack_expect *exp); -#endif /* __KERNEL__ */ -#endif /* _CONNTRACK_PPTP_H */ diff -puN include/linux/netfilter_ipv4/ip_conntrack_proto_gre.h~git-net /dev/null --- a/include/linux/netfilter_ipv4/ip_conntrack_proto_gre.h +++ /dev/null @@ -1,114 +0,0 @@ -#ifndef _CONNTRACK_PROTO_GRE_H -#define _CONNTRACK_PROTO_GRE_H -#include - -/* GRE PROTOCOL HEADER */ - -/* GRE Version field */ -#define GRE_VERSION_1701 0x0 -#define GRE_VERSION_PPTP 0x1 - -/* GRE Protocol field */ -#define GRE_PROTOCOL_PPTP 0x880B - -/* GRE 
Flags */ -#define GRE_FLAG_C 0x80 -#define GRE_FLAG_R 0x40 -#define GRE_FLAG_K 0x20 -#define GRE_FLAG_S 0x10 -#define GRE_FLAG_A 0x80 - -#define GRE_IS_C(f) ((f)&GRE_FLAG_C) -#define GRE_IS_R(f) ((f)&GRE_FLAG_R) -#define GRE_IS_K(f) ((f)&GRE_FLAG_K) -#define GRE_IS_S(f) ((f)&GRE_FLAG_S) -#define GRE_IS_A(f) ((f)&GRE_FLAG_A) - -/* GRE is a mess: Four different standards */ -struct gre_hdr { -#if defined(__LITTLE_ENDIAN_BITFIELD) - __u16 rec:3, - srr:1, - seq:1, - key:1, - routing:1, - csum:1, - version:3, - reserved:4, - ack:1; -#elif defined(__BIG_ENDIAN_BITFIELD) - __u16 csum:1, - routing:1, - key:1, - seq:1, - srr:1, - rec:3, - ack:1, - reserved:4, - version:3; -#else -#error "Adjust your defines" -#endif - __be16 protocol; -}; - -/* modified GRE header for PPTP */ -struct gre_hdr_pptp { - __u8 flags; /* bitfield */ - __u8 version; /* should be GRE_VERSION_PPTP */ - __be16 protocol; /* should be GRE_PROTOCOL_PPTP */ - __be16 payload_len; /* size of ppp payload, not inc. gre header */ - __be16 call_id; /* peer's call_id for this session */ - __be32 seq; /* sequence number. Present if S==1 */ - __be32 ack; /* seq number of highest packet recieved by */ - /* sender in this session */ -}; - - -/* this is part of ip_conntrack */ -struct ip_ct_gre { - unsigned int stream_timeout; - unsigned int timeout; -}; - -#ifdef __KERNEL__ -struct ip_conntrack_expect; -struct ip_conntrack; - -/* structure for original <-> reply keymap */ -struct ip_ct_gre_keymap { - struct list_head list; - - struct ip_conntrack_tuple tuple; -}; - -/* add new tuple->key_reply pair to keymap */ -int ip_ct_gre_keymap_add(struct ip_conntrack *ct, - struct ip_conntrack_tuple *t, - int reply); - -/* delete keymap entries */ -void ip_ct_gre_keymap_destroy(struct ip_conntrack *ct); - - -/* get pointer to gre key, if present */ -static inline __be32 *gre_key(struct gre_hdr *greh) -{ - if (!greh->key) - return NULL; - if (greh->csum || greh->routing) - return (__be32 *) (greh+sizeof(*greh)+4); - return (__be32 *) (greh+sizeof(*greh)); -} - -/* get pointer ot gre csum, if present */ -static inline __sum16 *gre_csum(struct gre_hdr *greh) -{ - if (!greh->csum) - return NULL; - return (__sum16 *) (greh+sizeof(*greh)); -} - -#endif /* __KERNEL__ */ - -#endif /* _CONNTRACK_PROTO_GRE_H */ diff -puN include/linux/netfilter_ipv4/ip_conntrack_protocol.h~git-net /dev/null --- a/include/linux/netfilter_ipv4/ip_conntrack_protocol.h +++ /dev/null @@ -1,98 +0,0 @@ -/* Header for use in defining a given protocol for connection tracking. */ -#ifndef _IP_CONNTRACK_PROTOCOL_H -#define _IP_CONNTRACK_PROTOCOL_H -#include -#include - -struct seq_file; - -struct ip_conntrack_protocol -{ - /* Protocol number. */ - u_int8_t proto; - - /* Protocol name */ - const char *name; - - /* Try to fill in the third arg: dataoff is offset past IP - hdr. Return true if possible. */ - int (*pkt_to_tuple)(const struct sk_buff *skb, - unsigned int dataoff, - struct ip_conntrack_tuple *tuple); - - /* Invert the per-proto part of the tuple: ie. turn xmit into reply. - * Some packets can't be inverted: return 0 in that case. - */ - int (*invert_tuple)(struct ip_conntrack_tuple *inverse, - const struct ip_conntrack_tuple *orig); - - /* Print out the per-protocol part of the tuple. Return like seq_* */ - int (*print_tuple)(struct seq_file *, - const struct ip_conntrack_tuple *); - - /* Print out the private part of the conntrack. */ - int (*print_conntrack)(struct seq_file *, const struct ip_conntrack *); - - /* Returns verdict for packet, or -1 for invalid. 
*/ - int (*packet)(struct ip_conntrack *conntrack, - const struct sk_buff *skb, - enum ip_conntrack_info ctinfo); - - /* Called when a new connection for this protocol found; - * returns TRUE if it's OK. If so, packet() called next. */ - int (*new)(struct ip_conntrack *conntrack, const struct sk_buff *skb); - - /* Called when a conntrack entry is destroyed */ - void (*destroy)(struct ip_conntrack *conntrack); - - int (*error)(struct sk_buff *skb, enum ip_conntrack_info *ctinfo, - unsigned int hooknum); - - /* convert protoinfo to nfnetink attributes */ - int (*to_nfattr)(struct sk_buff *skb, struct nfattr *nfa, - const struct ip_conntrack *ct); - - /* convert nfnetlink attributes to protoinfo */ - int (*from_nfattr)(struct nfattr *tb[], struct ip_conntrack *ct); - - int (*tuple_to_nfattr)(struct sk_buff *skb, - const struct ip_conntrack_tuple *t); - int (*nfattr_to_tuple)(struct nfattr *tb[], - struct ip_conntrack_tuple *t); - - /* Module (if any) which this is connected to. */ - struct module *me; -}; - -/* Protocol registration. */ -extern int ip_conntrack_protocol_register(struct ip_conntrack_protocol *proto); -extern void ip_conntrack_protocol_unregister(struct ip_conntrack_protocol *proto); -/* Existing built-in protocols */ -extern struct ip_conntrack_protocol ip_conntrack_protocol_tcp; -extern struct ip_conntrack_protocol ip_conntrack_protocol_udp; -extern struct ip_conntrack_protocol ip_conntrack_protocol_icmp; -extern struct ip_conntrack_protocol ip_conntrack_generic_protocol; -extern int ip_conntrack_protocol_tcp_init(void); - -/* Log invalid packets */ -extern unsigned int ip_ct_log_invalid; - -extern int ip_ct_port_tuple_to_nfattr(struct sk_buff *, - const struct ip_conntrack_tuple *); -extern int ip_ct_port_nfattr_to_tuple(struct nfattr *tb[], - struct ip_conntrack_tuple *); - -#ifdef CONFIG_SYSCTL -#ifdef DEBUG_INVALID_PACKETS -#define LOG_INVALID(proto) \ - (ip_ct_log_invalid == (proto) || ip_ct_log_invalid == IPPROTO_RAW) -#else -#define LOG_INVALID(proto) \ - ((ip_ct_log_invalid == (proto) || ip_ct_log_invalid == IPPROTO_RAW) \ - && net_ratelimit()) -#endif -#else -#define LOG_INVALID(proto) 0 -#endif /* CONFIG_SYSCTL */ - -#endif /*_IP_CONNTRACK_PROTOCOL_H*/ diff -puN include/linux/netfilter_ipv4/ip_conntrack_sctp.h~git-net /dev/null --- a/include/linux/netfilter_ipv4/ip_conntrack_sctp.h +++ /dev/null @@ -1,6 +0,0 @@ -#ifndef _IP_CONNTRACK_SCTP_H -#define _IP_CONNTRACK_SCTP_H - -#include - -#endif /* _IP_CONNTRACK_SCTP_H */ diff -puN include/linux/netfilter_ipv4/ip_conntrack_sip.h~git-net /dev/null --- a/include/linux/netfilter_ipv4/ip_conntrack_sip.h +++ /dev/null @@ -1,40 +0,0 @@ -#ifndef __IP_CONNTRACK_SIP_H__ -#define __IP_CONNTRACK_SIP_H__ -#ifdef __KERNEL__ - -#define SIP_PORT 5060 -#define SIP_TIMEOUT 3600 - -enum sip_header_pos { - POS_REG_REQ_URI, - POS_REQ_URI, - POS_FROM, - POS_TO, - POS_VIA, - POS_CONTACT, - POS_CONTENT, - POS_MEDIA, - POS_OWNER, - POS_CONNECTION, - POS_SDP_HEADER, -}; - -extern unsigned int (*ip_nat_sip_hook)(struct sk_buff **pskb, - enum ip_conntrack_info ctinfo, - struct ip_conntrack *ct, - const char **dptr); -extern unsigned int (*ip_nat_sdp_hook)(struct sk_buff **pskb, - enum ip_conntrack_info ctinfo, - struct ip_conntrack_expect *exp, - const char *dptr); - -extern int ct_sip_get_info(const char *dptr, size_t dlen, - unsigned int *matchoff, - unsigned int *matchlen, - enum sip_header_pos pos); -extern int ct_sip_lnlen(const char *line, const char *limit); -extern const char *ct_sip_search(const char *needle, const char *haystack, - 
size_t needle_len, size_t haystack_len, - int case_sensitive); -#endif /* __KERNEL__ */ -#endif /* __IP_CONNTRACK_SIP_H__ */ diff -puN include/linux/netfilter_ipv4/ip_conntrack_tcp.h~git-net /dev/null --- a/include/linux/netfilter_ipv4/ip_conntrack_tcp.h +++ /dev/null @@ -1,6 +0,0 @@ -#ifndef _IP_CONNTRACK_TCP_H -#define _IP_CONNTRACK_TCP_H - -#include - -#endif /* _IP_CONNTRACK_TCP_H */ diff -puN include/linux/netfilter_ipv4/ip_conntrack_tftp.h~git-net /dev/null --- a/include/linux/netfilter_ipv4/ip_conntrack_tftp.h +++ /dev/null @@ -1,20 +0,0 @@ -#ifndef _IP_CT_TFTP -#define _IP_CT_TFTP - -#define TFTP_PORT 69 - -struct tftphdr { - __be16 opcode; -}; - -#define TFTP_OPCODE_READ 1 -#define TFTP_OPCODE_WRITE 2 -#define TFTP_OPCODE_DATA 3 -#define TFTP_OPCODE_ACK 4 -#define TFTP_OPCODE_ERROR 5 - -extern unsigned int (*ip_nat_tftp_hook)(struct sk_buff **pskb, - enum ip_conntrack_info ctinfo, - struct ip_conntrack_expect *exp); - -#endif /* _IP_CT_TFTP */ diff -puN include/linux/netfilter_ipv4/ip_conntrack_tuple.h~git-net /dev/null --- a/include/linux/netfilter_ipv4/ip_conntrack_tuple.h +++ /dev/null @@ -1,146 +0,0 @@ -#ifndef _IP_CONNTRACK_TUPLE_H -#define _IP_CONNTRACK_TUPLE_H - -#include -#include - -/* A `tuple' is a structure containing the information to uniquely - identify a connection. ie. if two packets have the same tuple, they - are in the same connection; if not, they are not. - - We divide the structure along "manipulatable" and - "non-manipulatable" lines, for the benefit of the NAT code. -*/ - -/* The protocol-specific manipulable parts of the tuple: always in - network order! */ -union ip_conntrack_manip_proto -{ - /* Add other protocols here. */ - u_int16_t all; - - struct { - __be16 port; - } tcp; - struct { - __be16 port; - } udp; - struct { - __be16 id; - } icmp; - struct { - __be16 port; - } sctp; - struct { - __be16 key; /* key is 32bit, pptp only uses 16 */ - } gre; -}; - -/* The manipulable part of the tuple. */ -struct ip_conntrack_manip -{ - __be32 ip; - union ip_conntrack_manip_proto u; -}; - -/* This contains the information to distinguish a connection. */ -struct ip_conntrack_tuple -{ - struct ip_conntrack_manip src; - - /* These are the parts of the tuple which are fixed. */ - struct { - __be32 ip; - union { - /* Add other protocols here. */ - u_int16_t all; - - struct { - __be16 port; - } tcp; - struct { - __be16 port; - } udp; - struct { - u_int8_t type, code; - } icmp; - struct { - __be16 port; - } sctp; - struct { - __be16 key; /* key is 32bit, - * pptp only uses 16 */ - } gre; - } u; - - /* The protocol. */ - u_int8_t protonum; - - /* The direction (for tuplehash) */ - u_int8_t dir; - } dst; -}; - -/* This is optimized opposed to a memset of the whole structure. Everything we - * really care about is the source/destination unions */ -#define IP_CT_TUPLE_U_BLANK(tuple) \ - do { \ - (tuple)->src.u.all = 0; \ - (tuple)->dst.u.all = 0; \ - } while (0) - -#ifdef __KERNEL__ - -#define DUMP_TUPLE(tp) \ -DEBUGP("tuple %p: %u %u.%u.%u.%u:%hu -> %u.%u.%u.%u:%hu\n", \ - (tp), (tp)->dst.protonum, \ - NIPQUAD((tp)->src.ip), ntohs((tp)->src.u.all), \ - NIPQUAD((tp)->dst.ip), ntohs((tp)->dst.u.all)) - -/* If we're the first tuple, it's the original dir. 
*/ -#define DIRECTION(h) ((enum ip_conntrack_dir)(h)->tuple.dst.dir) - -/* Connections have two entries in the hash table: one for each way */ -struct ip_conntrack_tuple_hash -{ - struct list_head list; - - struct ip_conntrack_tuple tuple; -}; - -#endif /* __KERNEL__ */ - -static inline int ip_ct_tuple_src_equal(const struct ip_conntrack_tuple *t1, - const struct ip_conntrack_tuple *t2) -{ - return t1->src.ip == t2->src.ip - && t1->src.u.all == t2->src.u.all; -} - -static inline int ip_ct_tuple_dst_equal(const struct ip_conntrack_tuple *t1, - const struct ip_conntrack_tuple *t2) -{ - return t1->dst.ip == t2->dst.ip - && t1->dst.u.all == t2->dst.u.all - && t1->dst.protonum == t2->dst.protonum; -} - -static inline int ip_ct_tuple_equal(const struct ip_conntrack_tuple *t1, - const struct ip_conntrack_tuple *t2) -{ - return ip_ct_tuple_src_equal(t1, t2) && ip_ct_tuple_dst_equal(t1, t2); -} - -static inline int ip_ct_tuple_mask_cmp(const struct ip_conntrack_tuple *t, - const struct ip_conntrack_tuple *tuple, - const struct ip_conntrack_tuple *mask) -{ - return !(((t->src.ip ^ tuple->src.ip) & mask->src.ip) - || ((t->dst.ip ^ tuple->dst.ip) & mask->dst.ip) - || ((t->src.u.all ^ tuple->src.u.all) & mask->src.u.all) - || ((t->dst.u.all ^ tuple->dst.u.all) & mask->dst.u.all) - || ((t->dst.protonum ^ tuple->dst.protonum) - & mask->dst.protonum)); -} - -#endif /* _IP_CONNTRACK_TUPLE_H */ diff -puN include/linux/netfilter_ipv4/ip_nat.h~git-net /dev/null --- a/include/linux/netfilter_ipv4/ip_nat.h +++ /dev/null @@ -1,79 +0,0 @@ -#ifndef _IP_NAT_H -#define _IP_NAT_H -#include -#include - -#define IP_NAT_MAPPING_TYPE_MAX_NAMELEN 16 - -enum ip_nat_manip_type -{ - IP_NAT_MANIP_SRC, - IP_NAT_MANIP_DST -}; - -/* SRC manip occurs POST_ROUTING or LOCAL_IN */ -#define HOOK2MANIP(hooknum) ((hooknum) != NF_IP_POST_ROUTING && (hooknum) != NF_IP_LOCAL_IN) - -#define IP_NAT_RANGE_MAP_IPS 1 -#define IP_NAT_RANGE_PROTO_SPECIFIED 2 -#define IP_NAT_RANGE_PROTO_RANDOM 4 /* add randomness to "port" selection */ - -/* NAT sequence number modifications */ -struct ip_nat_seq { - /* position of the last TCP sequence number - * modification (if any) */ - u_int32_t correction_pos; - /* sequence number offset before and after last modification */ - int16_t offset_before, offset_after; -}; - -/* Single range specification. */ -struct ip_nat_range -{ - /* Set to OR of flags above. */ - unsigned int flags; - - /* Inclusive: network order. */ - __be32 min_ip, max_ip; - - /* Inclusive: network order */ - union ip_conntrack_manip_proto min, max; -}; - -/* For backwards compat: don't use in modern code. */ -struct ip_nat_multi_range_compat -{ - unsigned int rangesize; /* Must be 1. */ - - /* hangs off end. */ - struct ip_nat_range range[1]; -}; - -#ifdef __KERNEL__ -#include - -/* Protects NAT hash tables, and NAT-private part of conntracks. */ -extern rwlock_t ip_nat_lock; - -/* The structure embedded in the conntrack structure. */ -struct ip_nat_info -{ - struct list_head bysource; - struct ip_nat_seq seq[IP_CT_DIR_MAX]; -}; - -struct ip_conntrack; - -/* Set up the info structure to map into this range. */ -extern unsigned int ip_nat_setup_info(struct ip_conntrack *conntrack, - const struct ip_nat_range *range, - unsigned int hooknum); - -/* Is this tuple already taken? (not by us)*/ -extern int ip_nat_used_tuple(const struct ip_conntrack_tuple *tuple, - const struct ip_conntrack *ignored_conntrack); - -#else /* !__KERNEL__: iptables wants this to compile. 
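/*
 * Illustrative sketch, not part of the patch: building a single-address
 * source-NAT binding with the ip_nat_range / ip_nat_setup_info interface
 * declared above.  The address and the POST_ROUTING hook are arbitrary
 * example values.
 */
static unsigned int my_snat_binding(struct ip_conntrack *ct)
{
	struct ip_nat_range range = {
		.flags	= IP_NAT_RANGE_MAP_IPS,
		.min_ip	= htonl(0xc0a80001),	/* 192.168.0.1 */
		.max_ip	= htonl(0xc0a80001),
	};

	/* HOOK2MANIP(NF_IP_POST_ROUTING) == IP_NAT_MANIP_SRC */
	return ip_nat_setup_info(ct, &range, NF_IP_POST_ROUTING);
}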
*/ -#define ip_nat_multi_range ip_nat_multi_range_compat -#endif /*__KERNEL__*/ -#endif diff -puN include/linux/netfilter_ipv4/ip_nat_core.h~git-net /dev/null --- a/include/linux/netfilter_ipv4/ip_nat_core.h +++ /dev/null @@ -1,18 +0,0 @@ -#ifndef _IP_NAT_CORE_H -#define _IP_NAT_CORE_H -#include -#include - -/* This header used to share core functionality between the standalone - NAT module, and the compatibility layer's use of NAT for masquerading. */ - -extern unsigned int ip_nat_packet(struct ip_conntrack *ct, - enum ip_conntrack_info conntrackinfo, - unsigned int hooknum, - struct sk_buff **pskb); - -extern int ip_nat_icmp_reply_translation(struct ip_conntrack *ct, - enum ip_conntrack_info ctinfo, - unsigned int hooknum, - struct sk_buff **pskb); -#endif /* _IP_NAT_CORE_H */ diff -puN include/linux/netfilter_ipv4/ip_nat_helper.h~git-net /dev/null --- a/include/linux/netfilter_ipv4/ip_nat_helper.h +++ /dev/null @@ -1,33 +0,0 @@ -#ifndef _IP_NAT_HELPER_H -#define _IP_NAT_HELPER_H -/* NAT protocol helper routines. */ - -#include -#include - -struct sk_buff; - -/* These return true or false. */ -extern int ip_nat_mangle_tcp_packet(struct sk_buff **skb, - struct ip_conntrack *ct, - enum ip_conntrack_info ctinfo, - unsigned int match_offset, - unsigned int match_len, - const char *rep_buffer, - unsigned int rep_len); -extern int ip_nat_mangle_udp_packet(struct sk_buff **skb, - struct ip_conntrack *ct, - enum ip_conntrack_info ctinfo, - unsigned int match_offset, - unsigned int match_len, - const char *rep_buffer, - unsigned int rep_len); -extern int ip_nat_seq_adjust(struct sk_buff **pskb, - struct ip_conntrack *ct, - enum ip_conntrack_info ctinfo); - -/* Setup NAT on this expected conntrack so it follows master, but goes - * to port ct->master->saved_proto. */ -extern void ip_nat_follow_master(struct ip_conntrack *ct, - struct ip_conntrack_expect *this); -#endif diff -puN include/linux/netfilter_ipv4/ip_nat_pptp.h~git-net /dev/null --- a/include/linux/netfilter_ipv4/ip_nat_pptp.h +++ /dev/null @@ -1,11 +0,0 @@ -/* PPTP constants and structs */ -#ifndef _NAT_PPTP_H -#define _NAT_PPTP_H - -/* conntrack private data */ -struct ip_nat_pptp { - __be16 pns_call_id; /* NAT'ed PNS call id */ - __be16 pac_call_id; /* NAT'ed PAC call id */ -}; - -#endif /* _NAT_PPTP_H */ diff -puN include/linux/netfilter_ipv4/ip_nat_protocol.h~git-net /dev/null --- a/include/linux/netfilter_ipv4/ip_nat_protocol.h +++ /dev/null @@ -1,74 +0,0 @@ -/* Header for use in defining a given protocol. */ -#ifndef _IP_NAT_PROTOCOL_H -#define _IP_NAT_PROTOCOL_H -#include -#include - -#include -#include - -struct iphdr; -struct ip_nat_range; - -struct ip_nat_protocol -{ - /* Protocol name */ - const char *name; - - /* Protocol number. */ - unsigned int protonum; - - struct module *me; - - /* Translate a packet to the target according to manip type. - Return true if succeeded. */ - int (*manip_pkt)(struct sk_buff **pskb, - unsigned int iphdroff, - const struct ip_conntrack_tuple *tuple, - enum ip_nat_manip_type maniptype); - - /* Is the manipable part of the tuple between min and max incl? */ - int (*in_range)(const struct ip_conntrack_tuple *tuple, - enum ip_nat_manip_type maniptype, - const union ip_conntrack_manip_proto *min, - const union ip_conntrack_manip_proto *max); - - /* Alter the per-proto part of the tuple (depending on - maniptype), to give a unique tuple in the given range if - possible; return false if not. Per-protocol part of tuple - is initialized to the incoming packet. 
*/ - int (*unique_tuple)(struct ip_conntrack_tuple *tuple, - const struct ip_nat_range *range, - enum ip_nat_manip_type maniptype, - const struct ip_conntrack *conntrack); - - int (*range_to_nfattr)(struct sk_buff *skb, - const struct ip_nat_range *range); - - int (*nfattr_to_range)(struct nfattr *tb[], - struct ip_nat_range *range); -}; - -/* Protocol registration. */ -extern int ip_nat_protocol_register(struct ip_nat_protocol *proto); -extern void ip_nat_protocol_unregister(struct ip_nat_protocol *proto); - -extern struct ip_nat_protocol *ip_nat_proto_find_get(u_int8_t protocol); -extern void ip_nat_proto_put(struct ip_nat_protocol *proto); - -/* Built-in protocols. */ -extern struct ip_nat_protocol ip_nat_protocol_tcp; -extern struct ip_nat_protocol ip_nat_protocol_udp; -extern struct ip_nat_protocol ip_nat_protocol_icmp; -extern struct ip_nat_protocol ip_nat_unknown_protocol; - -extern int init_protocols(void) __init; -extern void cleanup_protocols(void); -extern struct ip_nat_protocol *find_nat_proto(u_int16_t protonum); - -extern int ip_nat_port_range_to_nfattr(struct sk_buff *skb, - const struct ip_nat_range *range); -extern int ip_nat_port_nfattr_to_range(struct nfattr *tb[], - struct ip_nat_range *range); - -#endif /*_IP_NAT_PROTO_H*/ diff -puN include/linux/netfilter_ipv4/ip_nat_rule.h~git-net /dev/null --- a/include/linux/netfilter_ipv4/ip_nat_rule.h +++ /dev/null @@ -1,28 +0,0 @@ -#ifndef _IP_NAT_RULE_H -#define _IP_NAT_RULE_H -#include -#include -#include - -#ifdef __KERNEL__ - -extern int ip_nat_rule_init(void) __init; -extern void ip_nat_rule_cleanup(void); -extern int ip_nat_rule_find(struct sk_buff **pskb, - unsigned int hooknum, - const struct net_device *in, - const struct net_device *out, - struct ip_conntrack *ct, - struct ip_nat_info *info); - -extern unsigned int -alloc_null_binding(struct ip_conntrack *conntrack, - struct ip_nat_info *info, - unsigned int hooknum); - -extern unsigned int -alloc_null_binding_confirmed(struct ip_conntrack *conntrack, - struct ip_nat_info *info, - unsigned int hooknum); -#endif -#endif /* _IP_NAT_RULE_H */ diff -puN include/linux/netfilter_ipv4/ipt_SAME.h~git-net include/linux/netfilter_ipv4/ipt_SAME.h --- a/include/linux/netfilter_ipv4/ipt_SAME.h~git-net +++ a/include/linux/netfilter_ipv4/ipt_SAME.h @@ -13,7 +13,7 @@ struct ipt_same_info u_int32_t *iparray; /* hangs off end. 
*/ - struct ip_nat_range range[IPT_SAME_MAX_RANGE]; + struct nf_nat_range range[IPT_SAME_MAX_RANGE]; }; #endif /*_IPT_SAME_H*/ diff -puN include/linux/netlink.h~git-net include/linux/netlink.h --- a/include/linux/netlink.h~git-net +++ a/include/linux/netlink.h @@ -138,6 +138,11 @@ struct nlattr #include #include +static inline struct nlmsghdr *nlmsg_hdr(const struct sk_buff *skb) +{ + return (struct nlmsghdr *)skb->data; +} + struct netlink_skb_parms { struct ucred creds; /* Skb credentials */ @@ -152,7 +157,10 @@ struct netlink_skb_parms #define NETLINK_CREDS(skb) (&NETLINK_CB((skb)).creds) -extern struct sock *netlink_kernel_create(int unit, unsigned int groups, void (*input)(struct sock *sk, int len), struct module *module); +extern struct sock *netlink_kernel_create(int unit, unsigned int groups, + void (*input)(struct sock *sk, int len), + struct mutex *cb_mutex, + struct module *module); extern void netlink_ack(struct sk_buff *in_skb, struct nlmsghdr *nlh, int err); extern int netlink_has_listeners(struct sock *sk, unsigned int group); extern int netlink_unicast(struct sock *ssk, struct sk_buff *skb, __u32 pid, int nonblock); @@ -171,9 +179,16 @@ int netlink_sendskb(struct sock *sk, str /* * skb should fit one page. This choice is good for headerless malloc. + * But we should limit to 8K so that userspace does not have to + * use enormous buffer sizes on recvmsg() calls just to avoid + * MSG_TRUNC when PAGE_SIZE is very large. */ -#define NLMSG_GOODORDER 0 -#define NLMSG_GOODSIZE (SKB_MAX_ORDER(0, NLMSG_GOODORDER)) +#if PAGE_SIZE < 8192UL +#define NLMSG_GOODSIZE SKB_WITH_OVERHEAD(PAGE_SIZE) +#else +#define NLMSG_GOODSIZE SKB_WITH_OVERHEAD(8192UL) +#endif + #define NLMSG_DEFAULT_SIZE (NLMSG_GOODSIZE - NLMSG_HDRLEN) @@ -217,18 +232,6 @@ __nlmsg_put(struct sk_buff *skb, u32 pid #define NLMSG_PUT(skb, pid, seq, type, len) \ NLMSG_NEW(skb, pid, seq, type, len, 0) -#define NLMSG_NEW_ANSWER(skb, cb, type, len, flags) \ - NLMSG_NEW(skb, NETLINK_CB((cb)->skb).pid, \ - (cb)->nlh->nlmsg_seq, type, len, flags) - -#define NLMSG_END(skb, nlh) \ -({ (nlh)->nlmsg_len = (skb)->tail - (unsigned char *) (nlh); \ - (skb)->len; }) - -#define NLMSG_CANCEL(skb, nlh) \ -({ skb_trim(skb, (unsigned char *) (nlh) - (skb)->data); \ - -1; }) - extern int netlink_dump_start(struct sock *ssk, struct sk_buff *skb, struct nlmsghdr *nlh, int (*dump)(struct sk_buff *skb, struct netlink_callback*), diff -puN /dev/null include/linux/nl80211.h --- /dev/null +++ a/include/linux/nl80211.h @@ -0,0 +1,38 @@ +#ifndef __LINUX_NL80211_H +#define __LINUX_NL80211_H +/* + * 802.11 netlink interface public header + * + * Copyright 2006, 2007 Johannes Berg + */ + +/** + * enum nl80211_iftype - (virtual) interface types + * @NL80211_IFTYPE_UNSPECIFIED: unspecified type, driver decides + * @NL80211_IFTYPE_ADHOC: independent BSS member + * @NL80211_IFTYPE_STATION: managed BSS member + * @NL80211_IFTYPE_AP: access point + * @NL80211_IFTYPE_AP_VLAN: VLAN interface for access points + * @NL80211_IFTYPE_WDS: wireless distribution interface + * @NL80211_IFTYPE_MONITOR: monitor interface receiving all frames + * @__NL80211_IFTYPE_AFTER_LAST: internal use + * + * These values are used with the NL80211_ATTR_IFTYPE + * to set the type of an interface. 
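/*
 * Illustrative sketch, not part of the patch: a kernel-side netlink user
 * updated for the extra cb_mutex argument in the netlink_kernel_create()
 * prototype above, and for the new nlmsg_hdr() helper.  NETLINK_USERSOCK,
 * "my_input" and "my_sk" are placeholders; passing NULL for cb_mutex is
 * assumed to fall back to the netlink core's own default mutex.
 */
static struct sock *my_sk;

static void my_input(struct sock *sk, int len)
{
	struct sk_buff *skb;

	while ((skb = skb_dequeue(&sk->sk_receive_queue)) != NULL) {
		struct nlmsghdr *nlh = nlmsg_hdr(skb);

		printk(KERN_DEBUG "got netlink message type %d\n",
		       nlh->nlmsg_type);
		kfree_skb(skb);
	}
}

static int __init my_module_init(void)
{
	my_sk = netlink_kernel_create(NETLINK_USERSOCK, 0, my_input,
				      NULL /* cb_mutex */, THIS_MODULE);
	return my_sk ? 0 : -ENOMEM;
}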
+ * + */ +enum nl80211_iftype { + NL80211_IFTYPE_UNSPECIFIED, + NL80211_IFTYPE_ADHOC, + NL80211_IFTYPE_STATION, + NL80211_IFTYPE_AP, + NL80211_IFTYPE_AP_VLAN, + NL80211_IFTYPE_WDS, + NL80211_IFTYPE_MONITOR, + + /* keep last */ + __NL80211_IFTYPE_AFTER_LAST +}; +#define NL80211_IFTYPE_MAX (__NL80211_IFTYPE_AFTER_LAST - 1) + +#endif /* __LINUX_NL80211_H */ diff -puN include/linux/rtnetlink.h~git-net include/linux/rtnetlink.h --- a/include/linux/rtnetlink.h~git-net +++ a/include/linux/rtnetlink.h @@ -574,13 +574,6 @@ extern int rtattr_parse(struct rtattr *t #define rtattr_parse_nested(tb, max, rta) \ rtattr_parse((tb), (max), RTA_DATA((rta)), RTA_PAYLOAD((rta))) -struct rtnetlink_link -{ - int (*doit)(struct sk_buff *, struct nlmsghdr*, void *attr); - int (*dumpit)(struct sk_buff *, struct netlink_callback *cb); -}; - -extern struct rtnetlink_link * rtnetlink_links[NPROTO]; extern int rtnetlink_send(struct sk_buff *skb, u32 pid, u32 group, int echo); extern int rtnl_unicast(struct sk_buff *skb, u32 pid); extern int rtnl_notify(struct sk_buff *skb, u32 pid, u32 group, @@ -605,7 +598,7 @@ extern void __rta_fill(struct sk_buff *s #define RTA_PUT_NOHDR(skb, attrlen, data) \ ({ RTA_APPEND(skb, RTA_ALIGN(attrlen), data); \ - memset(skb->tail - (RTA_ALIGN(attrlen) - attrlen), 0, \ + memset(skb_tail_pointer(skb) - (RTA_ALIGN(attrlen) - attrlen), 0, \ RTA_ALIGN(attrlen) - attrlen); }) #define RTA_PUT_U8(skb, attrtype, value) \ @@ -637,12 +630,12 @@ extern void __rta_fill(struct sk_buff *s RTA_PUT(skb, attrtype, 0, NULL); #define RTA_NEST(skb, type) \ -({ struct rtattr *__start = (struct rtattr *) (skb)->tail; \ +({ struct rtattr *__start = (struct rtattr *)skb_tail_pointer(skb); \ RTA_PUT(skb, type, 0, NULL); \ __start; }) #define RTA_NEST_END(skb, start) \ -({ (start)->rta_len = ((skb)->tail - (unsigned char *) (start)); \ +({ (start)->rta_len = skb_tail_pointer(skb) - (unsigned char *)(start); \ (skb)->len; }) #define RTA_NEST_CANCEL(skb, start) \ diff -puN include/linux/sctp.h~git-net include/linux/sctp.h --- a/include/linux/sctp.h~git-net +++ a/include/linux/sctp.h @@ -63,6 +63,15 @@ typedef struct sctphdr { __be32 checksum; } __attribute__((packed)) sctp_sctphdr_t; +#ifdef __KERNEL__ +#include + +static inline struct sctphdr *sctp_hdr(const struct sk_buff *skb) +{ + return (struct sctphdr *)skb_transport_header(skb); +} +#endif + /* Section 3.2. Chunk Field Descriptions. */ typedef struct sctp_chunkhdr { __u8 type; diff -puN include/linux/skbuff.h~git-net include/linux/skbuff.h --- a/include/linux/skbuff.h~git-net +++ a/include/linux/skbuff.h @@ -27,20 +27,24 @@ #include #include #include +#include #define HAVE_ALLOC_SKB /* For the drivers to know */ #define HAVE_ALIGNABLE_SKB /* Ditto 8) */ +/* Don't change this without changing skb_csum_unnecessary! 
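/*
 * Illustrative sketch, not part of the patch: bounds-checking a requested
 * interface type against the enum above.  How the value reaches the kernel
 * (e.g. via an NL80211_ATTR_IFTYPE attribute) is outside this header.
 */
static inline int nl80211_iftype_valid(unsigned int type)
{
	return type <= NL80211_IFTYPE_MAX;
}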
*/ #define CHECKSUM_NONE 0 -#define CHECKSUM_PARTIAL 1 -#define CHECKSUM_UNNECESSARY 2 -#define CHECKSUM_COMPLETE 3 +#define CHECKSUM_UNNECESSARY 1 +#define CHECKSUM_COMPLETE 2 +#define CHECKSUM_PARTIAL 3 #define SKB_DATA_ALIGN(X) (((X) + (SMP_CACHE_BYTES - 1)) & \ ~(SMP_CACHE_BYTES - 1)) -#define SKB_MAX_ORDER(X, ORDER) (((PAGE_SIZE << (ORDER)) - (X) - \ - sizeof(struct skb_shared_info)) & \ - ~(SMP_CACHE_BYTES - 1)) +#define SKB_WITH_OVERHEAD(X) \ + (((X) - sizeof(struct skb_shared_info)) & \ + ~(SMP_CACHE_BYTES - 1)) +#define SKB_MAX_ORDER(X, ORDER) \ + SKB_WITH_OVERHEAD((PAGE_SIZE << (ORDER)) - (X)) #define SKB_MAX_HEAD(X) (SKB_MAX_ORDER((X), 0)) #define SKB_MAX_ALLOC (SKB_MAX_ORDER(0, 2)) @@ -66,8 +70,8 @@ * NONE: skb is checksummed by protocol or csum is not required. * * PARTIAL: device is required to csum packet as seen by hard_start_xmit - * from skb->h.raw to the end and to record the checksum - * at skb->h.raw+skb->csum. + * from skb->transport_header to the end and to record the checksum + * at skb->transport_header + skb->csum. * * Device must show its capabilities in dev->features, set * at device setup time. @@ -83,12 +87,13 @@ */ struct net_device; +struct scatterlist; -#ifdef CONFIG_NETFILTER +#if defined(CONFIG_NF_CONNTRACK) || defined(CONFIG_NF_CONNTRACK_MODULE) struct nf_conntrack { atomic_t use; - void (*destroy)(struct nf_conntrack *); }; +#endif #ifdef CONFIG_BRIDGE_NETFILTER struct nf_bridge_info { @@ -103,8 +108,6 @@ struct nf_bridge_info { }; #endif -#endif - struct sk_buff_head { /* These two members must be first. */ struct sk_buff *next; @@ -156,11 +159,6 @@ struct skb_shared_info { #define SKB_DATAREF_SHIFT 16 #define SKB_DATAREF_MASK ((1 << SKB_DATAREF_SHIFT) - 1) -struct skb_timeval { - u32 off_sec; - u32 off_usec; -}; - enum { SKB_FCLONE_UNAVAILABLE, @@ -181,6 +179,16 @@ enum { SKB_GSO_TCPV6 = 1 << 4, }; +#if BITS_PER_LONG > 32 +#define NET_SKBUFF_DATA_USES_OFFSET 1 +#endif + +#ifdef NET_SKBUFF_DATA_USES_OFFSET +typedef unsigned int sk_buff_data_t; +#else +typedef unsigned char *sk_buff_data_t; +#endif + /** * struct sk_buff - socket buffer * @next: Next buffer in list @@ -190,15 +198,17 @@ enum { * @dev: Device we arrived on/are leaving by * @iif: ifindex of device we arrived on * @h: Transport layer header - * @nh: Network layer header - * @mac: Link layer header + * @network_header: Network layer header + * @mac_header: Link layer header * @dst: destination entry * @sp: the security path, used for xfrm * @cb: Control buffer. Free for use by every layer. 
Put private vars here * @len: Length of actual data * @data_len: Data length * @mac_len: Length of link layer header - * @csum: Checksum + * @csum: Checksum (must include start/offset pair) + * @csum_start: Offset from skb->head where checksumming should start + * @csum_offset: Offset from csum_start where checksum should be stored * @local_df: allow local fragmentation * @cloned: Head may be cloned (check refcnt to be sure) * @nohdr: Payload reference only, must not modify header @@ -233,32 +243,11 @@ struct sk_buff { struct sk_buff *prev; struct sock *sk; - struct skb_timeval tstamp; + ktime_t tstamp; struct net_device *dev; int iif; /* 4 byte hole on 64 bit*/ - union { - struct tcphdr *th; - struct udphdr *uh; - struct icmphdr *icmph; - struct igmphdr *igmph; - struct iphdr *ipiph; - struct ipv6hdr *ipv6h; - unsigned char *raw; - } h; - - union { - struct iphdr *iph; - struct ipv6hdr *ipv6h; - struct arphdr *arph; - unsigned char *raw; - } nh; - - union { - unsigned char *raw; - } mac; - struct dst_entry *dst; struct sec_path *sp; @@ -275,7 +264,10 @@ struct sk_buff { mac_len; union { __wsum csum; - __u32 csum_offset; + struct { + __u16 csum_start; + __u16 csum_offset; + }; }; __u32 priority; __u8 local_df:1, @@ -289,15 +281,13 @@ struct sk_buff { __be16 protocol; void (*destructor)(struct sk_buff *skb); -#ifdef CONFIG_NETFILTER - struct nf_conntrack *nfct; #if defined(CONFIG_NF_CONNTRACK) || defined(CONFIG_NF_CONNTRACK_MODULE) + struct nf_conntrack *nfct; struct sk_buff *nfct_reasm; #endif #ifdef CONFIG_BRIDGE_NETFILTER struct nf_bridge_info *nf_bridge; #endif -#endif /* CONFIG_NETFILTER */ #ifdef CONFIG_NET_SCHED __u16 tc_index; /* traffic control index */ #ifdef CONFIG_NET_CLS_ACT @@ -313,13 +303,16 @@ struct sk_buff { __u32 mark; + sk_buff_data_t transport_header; + sk_buff_data_t network_header; + sk_buff_data_t mac_header; /* These elements must be at the end, see alloc_skb() for details. 
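/*
 * Illustrative sketch, not part of the patch: finishing a CHECKSUM_PARTIAL
 * skb in software using the csum_start/csum_offset pair documented above
 * (roughly what the generic checksum-help path does; error handling and
 * the cloned-header case are omitted).
 */
static void my_finish_partial_csum(struct sk_buff *skb)
{
	unsigned int start = skb->csum_start - (skb->data - skb->head);
	__wsum csum = skb_checksum(skb, start, skb->len - start, 0);

	/* store the folded checksum csum_offset bytes past csum_start */
	*(__sum16 *)(skb->head + skb->csum_start + skb->csum_offset) = csum_fold(csum);
	skb->ip_summed = CHECKSUM_NONE;
}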
*/ + sk_buff_data_t tail; + sk_buff_data_t end; + unsigned char *head, + *data; unsigned int truesize; atomic_t users; - unsigned char *head, - *data, - *tail, - *end; }; #ifdef __KERNEL__ @@ -361,6 +354,11 @@ extern struct sk_buff *skb_realloc_headr extern struct sk_buff *skb_copy_expand(const struct sk_buff *skb, int newheadroom, int newtailroom, gfp_t priority); +extern int skb_to_sgvec(struct sk_buff *skb, + struct scatterlist *sg, int offset, + int len); +extern int skb_cow_data(struct sk_buff *skb, int tailbits, + struct sk_buff **trailer); extern int skb_pad(struct sk_buff *skb, int pad); #define dev_kfree_skb(a) kfree_skb(a) extern void skb_over_panic(struct sk_buff *skb, int len, @@ -402,8 +400,20 @@ extern unsigned int skb_find_text(stru unsigned int to, struct ts_config *config, struct ts_state *state); +#ifdef NET_SKBUFF_DATA_USES_OFFSET +static inline unsigned char *skb_end_pointer(const struct sk_buff *skb) +{ + return skb->head + skb->end; +} +#else +static inline unsigned char *skb_end_pointer(const struct sk_buff *skb) +{ + return skb->end; +} +#endif + /* Internal */ -#define skb_shinfo(SKB) ((struct skb_shared_info *)((SKB)->end)) +#define skb_shinfo(SKB) ((struct skb_shared_info *)(skb_end_pointer(SKB))) /** * skb_queue_empty - check if a queue is empty @@ -822,12 +832,46 @@ static inline void skb_fill_page_desc(st #define SKB_FRAG_ASSERT(skb) BUG_ON(skb_shinfo(skb)->frag_list) #define SKB_LINEAR_ASSERT(skb) BUG_ON(skb_is_nonlinear(skb)) +#ifdef NET_SKBUFF_DATA_USES_OFFSET +static inline unsigned char *skb_tail_pointer(const struct sk_buff *skb) +{ + return skb->head + skb->tail; +} + +static inline void skb_reset_tail_pointer(struct sk_buff *skb) +{ + skb->tail = skb->data - skb->head; +} + +static inline void skb_set_tail_pointer(struct sk_buff *skb, const int offset) +{ + skb_reset_tail_pointer(skb); + skb->tail += offset; +} +#else /* NET_SKBUFF_DATA_USES_OFFSET */ +static inline unsigned char *skb_tail_pointer(const struct sk_buff *skb) +{ + return skb->tail; +} + +static inline void skb_reset_tail_pointer(struct sk_buff *skb) +{ + skb->tail = skb->data; +} + +static inline void skb_set_tail_pointer(struct sk_buff *skb, const int offset) +{ + skb->tail = skb->data + offset; +} + +#endif /* NET_SKBUFF_DATA_USES_OFFSET */ + /* * Add data to an sk_buff */ static inline unsigned char *__skb_put(struct sk_buff *skb, unsigned int len) { - unsigned char *tmp = skb->tail; + unsigned char *tmp = skb_tail_pointer(skb); SKB_LINEAR_ASSERT(skb); skb->tail += len; skb->len += len; @@ -845,11 +889,11 @@ static inline unsigned char *__skb_put(s */ static inline unsigned char *skb_put(struct sk_buff *skb, unsigned int len) { - unsigned char *tmp = skb->tail; + unsigned char *tmp = skb_tail_pointer(skb); SKB_LINEAR_ASSERT(skb); skb->tail += len; skb->len += len; - if (unlikely(skb->tail>skb->end)) + if (unlikely(skb->tail > skb->end)) skb_over_panic(skb, len, current_text_addr()); return tmp; } @@ -962,6 +1006,130 @@ static inline void skb_reserve(struct sk skb->tail += len; } +#ifdef NET_SKBUFF_DATA_USES_OFFSET +static inline unsigned char *skb_transport_header(const struct sk_buff *skb) +{ + return skb->head + skb->transport_header; +} + +static inline void skb_reset_transport_header(struct sk_buff *skb) +{ + skb->transport_header = skb->data - skb->head; +} + +static inline void skb_set_transport_header(struct sk_buff *skb, + const int offset) +{ + skb_reset_transport_header(skb); + skb->transport_header += offset; +} + +static inline unsigned char *skb_network_header(const 
struct sk_buff *skb) +{ + return skb->head + skb->network_header; +} + +static inline void skb_reset_network_header(struct sk_buff *skb) +{ + skb->network_header = skb->data - skb->head; +} + +static inline void skb_set_network_header(struct sk_buff *skb, const int offset) +{ + skb_reset_network_header(skb); + skb->network_header += offset; +} + +static inline unsigned char *skb_mac_header(const struct sk_buff *skb) +{ + return skb->head + skb->mac_header; +} + +static inline int skb_mac_header_was_set(const struct sk_buff *skb) +{ + return skb->mac_header != ~0U; +} + +static inline void skb_reset_mac_header(struct sk_buff *skb) +{ + skb->mac_header = skb->data - skb->head; +} + +static inline void skb_set_mac_header(struct sk_buff *skb, const int offset) +{ + skb_reset_mac_header(skb); + skb->mac_header += offset; +} + +#else /* NET_SKBUFF_DATA_USES_OFFSET */ + +static inline unsigned char *skb_transport_header(const struct sk_buff *skb) +{ + return skb->transport_header; +} + +static inline void skb_reset_transport_header(struct sk_buff *skb) +{ + skb->transport_header = skb->data; +} + +static inline void skb_set_transport_header(struct sk_buff *skb, + const int offset) +{ + skb->transport_header = skb->data + offset; +} + +static inline unsigned char *skb_network_header(const struct sk_buff *skb) +{ + return skb->network_header; +} + +static inline void skb_reset_network_header(struct sk_buff *skb) +{ + skb->network_header = skb->data; +} + +static inline void skb_set_network_header(struct sk_buff *skb, const int offset) +{ + skb->network_header = skb->data + offset; +} + +static inline unsigned char *skb_mac_header(const struct sk_buff *skb) +{ + return skb->mac_header; +} + +static inline int skb_mac_header_was_set(const struct sk_buff *skb) +{ + return skb->mac_header != NULL; +} + +static inline void skb_reset_mac_header(struct sk_buff *skb) +{ + skb->mac_header = skb->data; +} + +static inline void skb_set_mac_header(struct sk_buff *skb, const int offset) +{ + skb->mac_header = skb->data + offset; +} +#endif /* NET_SKBUFF_DATA_USES_OFFSET */ + +static inline int skb_transport_offset(const struct sk_buff *skb) +{ + return skb_transport_header(skb) - skb->data; +} + +static inline u32 skb_network_header_len(const struct sk_buff *skb) +{ + return skb->transport_header - skb->network_header; +} + +static inline int skb_network_offset(const struct sk_buff *skb) +{ + return skb_network_header(skb) - skb->data; +} + /* * CPUs often take a performance hit when accessing unaligned memory * locations. 
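Taken together, the helpers above are the replacement for poking skb->mac.raw, skb->nh.raw and skb->h.raw directly: they compile to the same pointer arithmetic when sk_buff_data_t is a pointer, and to head-relative offsets when NET_SKBUFF_DATA_USES_OFFSET is set. A minimal receive-path sketch (function name and the nhlen parameter are made up for illustration):

static void example_rx_mark_headers(struct sk_buff *skb, unsigned int nhlen)
{
        skb_reset_network_header(skb);   /* network header starts at skb->data */
        skb_pull(skb, nhlen);            /* step past it ...                   */
        skb_reset_transport_header(skb); /* ... and mark the transport header  */

        /* skb_network_header_len(skb) now yields nhlen in either
         * representation, and skb_network_offset(skb) is -nhlen. */
}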
The actual performance hit varies, it can be small if the @@ -1013,8 +1181,8 @@ static inline void __skb_trim(struct sk_ WARN_ON(1); return; } - skb->len = len; - skb->tail = skb->data + len; + skb->len = len; + skb_set_tail_pointer(skb, len); } /** @@ -1326,8 +1494,8 @@ extern __wsum skb_checksum(const int len, __wsum csum); extern int skb_copy_bits(const struct sk_buff *skb, int offset, void *to, int len); -extern int skb_store_bits(const struct sk_buff *skb, int offset, - void *from, int len); +extern int skb_store_bits(struct sk_buff *skb, int offset, + const void *from, int len); extern __wsum skb_copy_and_csum_bits(const struct sk_buff *skb, int offset, u8 *to, int len, __wsum csum); @@ -1351,8 +1519,36 @@ static inline void *skb_header_pointer(c return buffer; } +static inline void skb_copy_from_linear_data(const struct sk_buff *skb, + void *to, + const unsigned int len) +{ + memcpy(to, skb->data, len); +} + +static inline void skb_copy_from_linear_data_offset(const struct sk_buff *skb, + const int offset, void *to, + const unsigned int len) +{ + memcpy(to, skb->data + offset, len); +} + +static inline void skb_copy_to_linear_data(struct sk_buff *skb, + const void *from, + const unsigned int len) +{ + memcpy(skb->data, from, len); +} + +static inline void skb_copy_to_linear_data_offset(struct sk_buff *skb, + const int offset, + const void *from, + const unsigned int len) +{ + memcpy(skb->data + offset, from, len); +} + extern void skb_init(void); -extern void skb_add_mtu(int mtu); /** * skb_get_timestamp - get timestamp from a skb @@ -1365,29 +1561,23 @@ extern void skb_add_mtu(int mtu); */ static inline void skb_get_timestamp(const struct sk_buff *skb, struct timeval *stamp) { - stamp->tv_sec = skb->tstamp.off_sec; - stamp->tv_usec = skb->tstamp.off_usec; + *stamp = ktime_to_timeval(skb->tstamp); } -/** - * skb_set_timestamp - set timestamp of a skb - * @skb: skb to set stamp of - * @stamp: pointer to struct timeval to get stamp from - * - * Timestamps are stored in the skb as offsets to a base timestamp. - * This function converts a struct timeval to an offset and stores - * it in the skb. - */ -static inline void skb_set_timestamp(struct sk_buff *skb, const struct timeval *stamp) +static inline void __net_timestamp(struct sk_buff *skb) { - skb->tstamp.off_sec = stamp->tv_sec; - skb->tstamp.off_usec = stamp->tv_usec; + skb->tstamp = ktime_get_real(); } -extern void __net_timestamp(struct sk_buff *skb); +extern __sum16 __skb_checksum_complete_head(struct sk_buff *skb, int len); extern __sum16 __skb_checksum_complete(struct sk_buff *skb); +static inline int skb_csum_unnecessary(const struct sk_buff *skb) +{ + return skb->ip_summed & CHECKSUM_UNNECESSARY; +} + /** * skb_checksum_complete - Calculate checksum of an entire packet * @skb: packet to process @@ -1406,22 +1596,22 @@ extern __sum16 __skb_checksum_complete(s */ static inline unsigned int skb_checksum_complete(struct sk_buff *skb) { - return skb->ip_summed != CHECKSUM_UNNECESSARY && - __skb_checksum_complete(skb); + return skb_csum_unnecessary(skb) ? 
+ 0 : __skb_checksum_complete(skb); } -#ifdef CONFIG_NETFILTER +#if defined(CONFIG_NF_CONNTRACK) || defined(CONFIG_NF_CONNTRACK_MODULE) +extern void nf_conntrack_destroy(struct nf_conntrack *nfct); static inline void nf_conntrack_put(struct nf_conntrack *nfct) { if (nfct && atomic_dec_and_test(&nfct->use)) - nfct->destroy(nfct); + nf_conntrack_destroy(nfct); } static inline void nf_conntrack_get(struct nf_conntrack *nfct) { if (nfct) atomic_inc(&nfct->use); } -#if defined(CONFIG_NF_CONNTRACK) || defined(CONFIG_NF_CONNTRACK_MODULE) static inline void nf_conntrack_get_reasm(struct sk_buff *skb) { if (skb) @@ -1447,9 +1637,9 @@ static inline void nf_bridge_get(struct #endif /* CONFIG_BRIDGE_NETFILTER */ static inline void nf_reset(struct sk_buff *skb) { +#if defined(CONFIG_NF_CONNTRACK) || defined(CONFIG_NF_CONNTRACK_MODULE) nf_conntrack_put(skb->nfct); skb->nfct = NULL; -#if defined(CONFIG_NF_CONNTRACK) || defined(CONFIG_NF_CONNTRACK_MODULE) nf_conntrack_put_reasm(skb->nfct_reasm); skb->nfct_reasm = NULL; #endif @@ -1459,9 +1649,33 @@ static inline void nf_reset(struct sk_bu #endif } -#else /* CONFIG_NETFILTER */ -static inline void nf_reset(struct sk_buff *skb) {} -#endif /* CONFIG_NETFILTER */ +/* Note: This doesn't put any conntrack and bridge info in dst. */ +static inline void __nf_copy(struct sk_buff *dst, const struct sk_buff *src) +{ +#if defined(CONFIG_NF_CONNTRACK) || defined(CONFIG_NF_CONNTRACK_MODULE) + dst->nfct = src->nfct; + nf_conntrack_get(src->nfct); + dst->nfctinfo = src->nfctinfo; + dst->nfct_reasm = src->nfct_reasm; + nf_conntrack_get_reasm(src->nfct_reasm); +#endif +#ifdef CONFIG_BRIDGE_NETFILTER + dst->nf_bridge = src->nf_bridge; + nf_bridge_get(src->nf_bridge); +#endif +} + +static inline void nf_copy(struct sk_buff *dst, const struct sk_buff *src) +{ +#if defined(CONFIG_NF_CONNTRACK) || defined(CONFIG_NF_CONNTRACK_MODULE) + nf_conntrack_put(dst->nfct); + nf_conntrack_put_reasm(dst->nfct_reasm); +#endif +#ifdef CONFIG_BRIDGE_NETFILTER + nf_bridge_put(dst->nf_bridge); +#endif + __nf_copy(dst, src); +} #ifdef CONFIG_NETWORK_SECMARK static inline void skb_copy_secmark(struct sk_buff *to, const struct sk_buff *from) @@ -1486,5 +1700,12 @@ static inline int skb_is_gso(const struc return skb_shinfo(skb)->gso_size; } +static inline void skb_forward_csum(struct sk_buff *skb) +{ + /* Unfortunately we don't support this one. Any brave souls? 
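skb_csum_unnecessary() centralizes the CHECKSUM_UNNECESSARY test, and skb_checksum_complete() above already short-circuits through it. Receive paths that used to open-code the comparison can be written as in this illustrative, hypothetical helper:

static int example_rcv_check(struct sk_buff *skb)
{
        /* zero when the checksum is known good or verifies correctly */
        if (skb_checksum_complete(skb))
                return -EINVAL;         /* bad checksum, caller drops the skb */
        return 0;
}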
*/ + if (skb->ip_summed == CHECKSUM_COMPLETE) + skb->ip_summed = CHECKSUM_NONE; +} + #endif /* __KERNEL__ */ #endif /* _LINUX_SKBUFF_H */ diff -puN include/linux/sysctl.h~git-net include/linux/sysctl.h --- a/include/linux/sysctl.h~git-net +++ a/include/linux/sysctl.h @@ -290,6 +290,7 @@ enum NET_CORE_BUDGET=19, NET_CORE_AEVENT_ETIME=20, NET_CORE_AEVENT_RSEQTH=21, + NET_CORE_WARNINGS=22, }; /* /proc/sys/net/ethernet */ @@ -438,6 +439,8 @@ enum NET_CIPSOV4_RBM_STRICTVALID=121, NET_TCP_AVAIL_CONG_CONTROL=122, NET_TCP_ALLOWED_CONG_CONTROL=123, + NET_TCP_MAX_SSTHRESH=124, + NET_TCP_FRTO_RESPONSE=125, }; enum { @@ -580,6 +583,7 @@ enum { NET_IPV6_RTR_PROBE_INTERVAL=21, NET_IPV6_ACCEPT_RA_RT_INFO_MAX_PLEN=22, NET_IPV6_PROXY_NDP=23, + NET_IPV6_OPTIMISTIC_DAD=24, __NET_IPV6_MAX }; @@ -788,6 +792,7 @@ enum { NET_BRIDGE_NF_CALL_IPTABLES = 2, NET_BRIDGE_NF_CALL_IP6TABLES = 3, NET_BRIDGE_NF_FILTER_VLAN_TAGGED = 4, + NET_BRIDGE_NF_FILTER_PPPOE_TAGGED = 5, }; /* CTL_FS names: */ diff -puN include/linux/tcp.h~git-net include/linux/tcp.h --- a/include/linux/tcp.h~git-net +++ a/include/linux/tcp.h @@ -178,6 +178,21 @@ struct tcp_md5sig { #include #include +static inline struct tcphdr *tcp_hdr(const struct sk_buff *skb) +{ + return (struct tcphdr *)skb_transport_header(skb); +} + +static inline unsigned int tcp_hdrlen(const struct sk_buff *skb) +{ + return tcp_hdr(skb)->doff * 4; +} + +static inline unsigned int tcp_optlen(const struct sk_buff *skb) +{ + return (tcp_hdr(skb)->doff - 5) * 4; +} + /* This defines a selective acknowledgement block. */ struct tcp_sack_block_wire { __be32 start_seq; @@ -242,6 +257,8 @@ struct tcp_sock { * See RFC793 and RFC1122. The RFC writes these in capitals. */ u32 rcv_nxt; /* What we want to receive next */ + u32 copied_seq; /* Head of yet unread data */ + u32 rcv_wup; /* rcv_nxt on last window update sent */ u32 snd_nxt; /* Next sequence we send */ u32 snd_una; /* First byte we want an ack for */ @@ -300,17 +317,15 @@ struct tcp_sock { u32 snd_ssthresh; /* Slow start size threshold */ u32 snd_cwnd; /* Sending congestion window */ u16 snd_cwnd_cnt; /* Linear increase counter */ - u16 snd_cwnd_clamp; /* Do not allow snd_cwnd to grow above this */ + u32 snd_cwnd_clamp; /* Do not allow snd_cwnd to grow above this */ u32 snd_cwnd_used; u32 snd_cwnd_stamp; struct sk_buff_head out_of_order_queue; /* Out of order segments go here */ u32 rcv_wnd; /* Current receiver window */ - u32 rcv_wup; /* rcv_nxt on last window update sent */ u32 write_seq; /* Tail(+1) of data held in tcp send buffer */ u32 pushed_seq; /* Last pushed seq, required to talk to windows */ - u32 copied_seq; /* Head of yet unread data */ /* SACKs data */ struct tcp_sack_block duplicate_sack[1]; /* D-SACK block */ diff -puN include/linux/udp.h~git-net include/linux/udp.h --- a/include/linux/udp.h~git-net +++ a/include/linux/udp.h @@ -26,6 +26,15 @@ struct udphdr { __sum16 check; }; +#ifdef __KERNEL__ +#include + +static inline struct udphdr *udp_hdr(const struct sk_buff *skb) +{ + return (struct udphdr *)skb_transport_header(skb); +} +#endif + /* UDP socket options */ #define UDP_CORK 1 /* Never send partially complete segments */ #define UDP_ENCAP 100 /* Set the socket to accept encapsulated packets */ diff -puN include/net/addrconf.h~git-net include/net/addrconf.h --- a/include/net/addrconf.h~git-net +++ a/include/net/addrconf.h @@ -73,7 +73,9 @@ extern int ipv6_get_saddr(struct dst_e extern int ipv6_dev_get_saddr(struct net_device *dev, struct in6_addr *daddr, struct in6_addr *saddr); -extern int 
ipv6_get_lladdr(struct net_device *dev, struct in6_addr *); +extern int ipv6_get_lladdr(struct net_device *dev, + struct in6_addr *addr, + unsigned char banned_flags); extern int ipv6_rcv_saddr_equal(const struct sock *sk, const struct sock *sk2); extern void addrconf_join_solict(struct net_device *dev, diff -puN include/net/ax25.h~git-net include/net/ax25.h --- a/include/net/ax25.h~git-net +++ a/include/net/ax25.h @@ -263,8 +263,8 @@ static __inline__ void ax25_cb_put(ax25_ static inline __be16 ax25_type_trans(struct sk_buff *skb, struct net_device *dev) { skb->dev = dev; + skb_reset_mac_header(skb); skb->pkt_type = PACKET_HOST; - skb->mac.raw = skb->data; return htons(ETH_P_AX25); } diff -puN include/net/bluetooth/hci.h~git-net include/net/bluetooth/hci.h --- a/include/net/bluetooth/hci.h~git-net +++ a/include/net/bluetooth/hci.h @@ -709,6 +709,24 @@ struct hci_sco_hdr { __u8 dlen; } __attribute__ ((packed)); +#ifdef __KERNEL__ +#include +static inline struct hci_event_hdr *hci_event_hdr(const struct sk_buff *skb) +{ + return (struct hci_event_hdr *)skb->data; +} + +static inline struct hci_acl_hdr *hci_acl_hdr(const struct sk_buff *skb) +{ + return (struct hci_acl_hdr *)skb->data; +} + +static inline struct hci_sco_hdr *hci_sco_hdr(const struct sk_buff *skb) +{ + return (struct hci_sco_hdr *)skb->data; +} +#endif + /* Command opcode pack/unpack */ #define hci_opcode_pack(ogf, ocf) (__u16) ((ocf & 0x03ff)|(ogf << 10)) #define hci_opcode_ogf(op) (op >> 10) diff -puN /dev/null include/net/cfg80211.h --- /dev/null +++ a/include/net/cfg80211.h @@ -0,0 +1,36 @@ +#ifndef __NET_CFG80211_H +#define __NET_CFG80211_H + +#include +#include +#include + +/* + * 802.11 configuration in-kernel interface + * + * Copyright 2006 Johannes Berg + */ + +/* from net/wireless.h */ +struct wiphy; + +/** + * struct cfg80211_ops - backend description for wireless configuration + * + * This struct is registered by fullmac card drivers and/or wireless stacks + * in order to handle configuration requests on their interfaces. + * + * All callbacks except where otherwise noted should return 0 + * on success or a negative error code. + * + * @add_virtual_intf: create a new virtual interface with the given name + * + * @del_virtual_intf: remove the virtual interface determined by ifindex. 
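The cfg80211_ops callbacks documented above are all a fullmac driver or wireless stack needs to supply at this stage. A hypothetical driver (names and stub bodies invented for illustration; how the ops structure is registered against a wiphy is not part of this hunk) would wire them up like this:

static int mydrv_add_virtual_intf(struct wiphy *wiphy, char *name,
                                  unsigned int type)
{
        /* create a net_device of the requested interface type */
        return -EOPNOTSUPP;
}

static int mydrv_del_virtual_intf(struct wiphy *wiphy, int ifindex)
{
        /* tear down the net_device identified by ifindex */
        return -EOPNOTSUPP;
}

static struct cfg80211_ops mydrv_cfg_ops = {
        .add_virtual_intf = mydrv_add_virtual_intf,
        .del_virtual_intf = mydrv_del_virtual_intf,
};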
+ */ +struct cfg80211_ops { + int (*add_virtual_intf)(struct wiphy *wiphy, char *name, + unsigned int type); + int (*del_virtual_intf)(struct wiphy *wiphy, int ifindex); +}; + +#endif /* __NET_CFG80211_H */ diff -puN include/net/cipso_ipv4.h~git-net include/net/cipso_ipv4.h --- a/include/net/cipso_ipv4.h~git-net +++ a/include/net/cipso_ipv4.h @@ -120,7 +120,7 @@ extern int cipso_v4_rbm_strictvalid; */ #define CIPSO_V4_OPTEXIST(x) (IPCB(x)->opt.cipso != 0) -#define CIPSO_V4_OPTPTR(x) ((x)->nh.raw + IPCB(x)->opt.cipso) +#define CIPSO_V4_OPTPTR(x) (skb_network_header(x) + IPCB(x)->opt.cipso) /* * DOI List Functions diff -puN include/net/compat.h~git-net include/net/compat.h --- a/include/net/compat.h~git-net +++ a/include/net/compat.h @@ -25,6 +25,7 @@ struct compat_cmsghdr { }; extern int compat_sock_get_timestamp(struct sock *, struct timeval __user *); +extern int compat_sock_get_timestampns(struct sock *, struct timespec __user *); #else /* defined(CONFIG_COMPAT) */ #define compat_msghdr msghdr /* to avoid compiler warnings */ diff -puN include/net/dn_fib.h~git-net include/net/dn_fib.h --- a/include/net/dn_fib.h~git-net +++ a/include/net/dn_fib.h @@ -148,17 +148,8 @@ extern void dn_fib_rules_cleanup(void); extern unsigned dnet_addr_type(__le16 addr); extern int dn_fib_lookup(struct flowi *fl, struct dn_fib_res *res); -/* - * rtnetlink interface - */ -extern int dn_fib_rtm_delroute(struct sk_buff *skb, struct nlmsghdr *nlh, void *arg); -extern int dn_fib_rtm_newroute(struct sk_buff *skb, struct nlmsghdr *nlh, void *arg); extern int dn_fib_dump(struct sk_buff *skb, struct netlink_callback *cb); -extern int dn_fib_rtm_delrule(struct sk_buff *skb, struct nlmsghdr *nlh, void *arg); -extern int dn_fib_rtm_newrule(struct sk_buff *skb, struct nlmsghdr *nlh, void *arg); -extern int dn_fib_dump_rules(struct sk_buff *skb, struct netlink_callback *cb); - extern void dn_fib_free_info(struct dn_fib_info *fi); static inline void dn_fib_info_put(struct dn_fib_info *fi) diff -puN include/net/dn_route.h~git-net include/net/dn_route.h --- a/include/net/dn_route.h~git-net +++ a/include/net/dn_route.h @@ -18,7 +18,6 @@ extern struct sk_buff *dn_alloc_skb(struct sock *sk, int size, gfp_t pri); extern int dn_route_output_sock(struct dst_entry **pprt, struct flowi *, struct sock *sk, int flags); extern int dn_cache_dump(struct sk_buff *skb, struct netlink_callback *cb); -extern int dn_cache_getroute(struct sk_buff *skb, struct nlmsghdr *nlh, void *arg); extern void dn_rt_cache_flush(int delay); /* Masks for flags field */ diff -puN include/net/esp.h~git-net include/net/esp.h --- a/include/net/esp.h~git-net +++ a/include/net/esp.h @@ -40,8 +40,6 @@ struct esp_data } auth; }; -extern int skb_to_sgvec(struct sk_buff *skb, struct scatterlist *sg, int offset, int len); -extern int skb_cow_data(struct sk_buff *skb, int tailbits, struct sk_buff **trailer); extern void *pskb_put(struct sk_buff *skb, struct sk_buff *tail, int len); static inline int esp_mac_digest(struct esp_data *esp, struct sk_buff *skb, diff -puN include/net/fib_rules.h~git-net include/net/fib_rules.h --- a/include/net/fib_rules.h~git-net +++ a/include/net/fib_rules.h @@ -5,7 +5,7 @@ #include #include #include -#include +#include struct fib_rule { @@ -19,6 +19,8 @@ struct fib_rule u32 flags; u32 table; u8 action; + u32 target; + struct fib_rule * ctarget; struct rcu_head rcu; }; @@ -35,6 +37,8 @@ struct fib_rules_ops struct list_head list; int rule_size; int addr_size; + int unresolved_rules; + int nr_goto_rules; int (*action)(struct fib_rule *, 
struct flowi *, int, @@ -55,6 +59,10 @@ struct fib_rules_ops u32 (*default_pref)(void); size_t (*nlmsg_payload)(struct fib_rule *); + /* Called after modifications to the rules set, must flush + * the route cache if one exists. */ + void (*flush_cache)(void); + int nlgroup; struct nla_policy *policy; struct list_head *rules_list; @@ -66,7 +74,8 @@ struct fib_rules_ops [FRA_PRIORITY] = { .type = NLA_U32 }, \ [FRA_FWMARK] = { .type = NLA_U32 }, \ [FRA_FWMASK] = { .type = NLA_U32 }, \ - [FRA_TABLE] = { .type = NLA_U32 } + [FRA_TABLE] = { .type = NLA_U32 }, \ + [FRA_GOTO] = { .type = NLA_U32 } static inline void fib_rule_get(struct fib_rule *rule) { @@ -98,11 +107,4 @@ extern int fib_rules_unregister(struct extern int fib_rules_lookup(struct fib_rules_ops *, struct flowi *, int flags, struct fib_lookup_arg *); - -extern int fib_nl_newrule(struct sk_buff *, - struct nlmsghdr *, void *); -extern int fib_nl_delrule(struct sk_buff *, - struct nlmsghdr *, void *); -extern int fib_rules_dump(struct sk_buff *, - struct netlink_callback *, int); #endif diff -puN include/net/inet6_hashtables.h~git-net include/net/inet6_hashtables.h --- a/include/net/inet6_hashtables.h~git-net +++ a/include/net/inet6_hashtables.h @@ -19,6 +19,9 @@ #include #include #include +#include + +#include #include @@ -28,12 +31,11 @@ struct inet_hashinfo; static inline unsigned int inet6_ehashfn(const struct in6_addr *laddr, const u16 lport, const struct in6_addr *faddr, const __be16 fport) { - unsigned int hashent = (lport ^ (__force u16)fport); + u32 ports = (lport ^ (__force u16)fport); - hashent ^= (__force u32)(laddr->s6_addr32[3] ^ faddr->s6_addr32[3]); - hashent ^= hashent >> 16; - hashent ^= hashent >> 8; - return hashent; + return jhash_3words((__force u32)laddr->s6_addr32[3], + (__force u32)faddr->s6_addr32[3], + ports, inet_ehash_secret); } static inline int inet6_sk_ehashfn(const struct sock *sk) diff -puN include/net/inet_ecn.h~git-net include/net/inet_ecn.h --- a/include/net/inet_ecn.h~git-net +++ a/include/net/inet_ecn.h @@ -114,13 +114,13 @@ static inline int INET_ECN_set_ce(struct { switch (skb->protocol) { case __constant_htons(ETH_P_IP): - if (skb->nh.raw + sizeof(struct iphdr) <= skb->tail) - return IP_ECN_set_ce(skb->nh.iph); + if (skb->network_header + sizeof(struct iphdr) <= skb->tail) + return IP_ECN_set_ce(ip_hdr(skb)); break; case __constant_htons(ETH_P_IPV6): - if (skb->nh.raw + sizeof(struct ipv6hdr) <= skb->tail) - return IP6_ECN_set_ce(skb->nh.ipv6h); + if (skb->network_header + sizeof(struct ipv6hdr) <= skb->tail) + return IP6_ECN_set_ce(ipv6_hdr(skb)); break; } diff -puN include/net/inet_sock.h~git-net include/net/inet_sock.h --- a/include/net/inet_sock.h~git-net +++ a/include/net/inet_sock.h @@ -19,6 +19,7 @@ #include #include +#include #include #include @@ -167,13 +168,15 @@ static inline void inet_sk_copy_descenda extern int inet_sk_rebuild_header(struct sock *sk); +extern u32 inet_ehash_secret; +extern void build_ehash_secret(void); + static inline unsigned int inet_ehashfn(const __be32 laddr, const __u16 lport, const __be32 faddr, const __be16 fport) { - unsigned int h = ((__force __u32)laddr ^ lport) ^ ((__force __u32)faddr ^ (__force __u32)fport); - h ^= h >> 16; - h ^= h >> 8; - return h; + return jhash_2words((__force __u32) laddr ^ (__force __u32) faddr, + ((__u32) lport) << 16 | (__force __u32)fport, + inet_ehash_secret); } static inline int inet_sk_ehashfn(const struct sock *sk) diff -puN include/net/ip.h~git-net include/net/ip.h --- a/include/net/ip.h~git-net +++ a/include/net/ip.h @@ 
-25,6 +25,7 @@ #include #include #include +#include #include #include @@ -43,6 +44,11 @@ struct inet_skb_parm #define IPSKB_REROUTED 16 }; +static inline unsigned int ip_hdrlen(const struct sk_buff *skb) +{ + return ip_hdr(skb)->ihl * 4; +} + struct ipcm_cookie { __be32 addr; @@ -74,7 +80,6 @@ struct msghdr; struct net_device; struct packet_type; struct rtable; -struct sk_buff; struct sockaddr; extern void ip_mc_dropsocket(struct sock *); @@ -161,6 +166,9 @@ DECLARE_SNMP_STAT(struct linux_mib, net_ #define NET_ADD_STATS_BH(field, adnd) SNMP_ADD_STATS_BH(net_statistics, field, adnd) #define NET_ADD_STATS_USER(field, adnd) SNMP_ADD_STATS_USER(net_statistics, field, adnd) +extern int snmp_mib_init(void *ptr[2], size_t mibsize, size_t mibalign); +extern void snmp_mib_free(void *ptr[2]); + extern int sysctl_local_port_range[2]; extern int sysctl_ip_default_ttl; extern int sysctl_ip_nonlocal_bind; diff -puN include/net/ip6_fib.h~git-net include/net/ip6_fib.h --- a/include/net/ip6_fib.h~git-net +++ a/include/net/ip6_fib.h @@ -219,8 +219,6 @@ extern void fib6_init(void); extern void fib6_rules_init(void); extern void fib6_rules_cleanup(void); -extern int fib6_rules_dump(struct sk_buff *, - struct netlink_callback *); #endif #endif diff -puN include/net/ip6_route.h~git-net include/net/ip6_route.h --- a/include/net/ip6_route.h~git-net +++ a/include/net/ip6_route.h @@ -116,12 +116,7 @@ extern void rt6_pmtu_discovery(struct struct net_device *dev, u32 pmtu); -struct nlmsghdr; struct netlink_callback; -extern int inet6_dump_fib(struct sk_buff *skb, struct netlink_callback *cb); -extern int inet6_rtm_newroute(struct sk_buff *skb, struct nlmsghdr* nlh, void *arg); -extern int inet6_rtm_delroute(struct sk_buff *skb, struct nlmsghdr* nlh, void *arg); -extern int inet6_rtm_getroute(struct sk_buff *skb, struct nlmsghdr* nlh, void *arg); struct rt6_rtnl_dump_arg { diff -puN include/net/ip_fib.h~git-net include/net/ip_fib.h --- a/include/net/ip_fib.h~git-net +++ a/include/net/ip_fib.h @@ -215,10 +215,6 @@ extern void fib_select_default(const str /* Exported by fib_frontend.c */ extern struct nla_policy rtm_ipv4_policy[]; extern void ip_fib_init(void); -extern int inet_rtm_delroute(struct sk_buff *skb, struct nlmsghdr* nlh, void *arg); -extern int inet_rtm_newroute(struct sk_buff *skb, struct nlmsghdr* nlh, void *arg); -extern int inet_rtm_getroute(struct sk_buff *skb, struct nlmsghdr* nlh, void *arg); -extern int inet_dump_fib(struct sk_buff *skb, struct netlink_callback *cb); extern int fib_validate_source(__be32 src, __be32 dst, u8 tos, int oif, struct net_device *dev, __be32 *spec_dst, u32 *itag); extern void fib_select_multipath(const struct flowi *flp, struct fib_result *res); @@ -235,8 +231,6 @@ extern __be32 __fib_res_prefsrc(struct extern struct fib_table *fib_hash_init(u32 id); #ifdef CONFIG_IP_MULTIPLE_TABLES -extern int fib4_rules_dump(struct sk_buff *skb, struct netlink_callback *cb); - extern void __init fib4_rules_init(void); #ifdef CONFIG_NET_CLS_ROUTE diff -puN include/net/ipv6.h~git-net include/net/ipv6.h --- a/include/net/ipv6.h~git-net +++ a/include/net/ipv6.h @@ -172,6 +172,7 @@ int snmp6_alloc_dev(struct inet6_dev *id int snmp6_free_dev(struct inet6_dev *idev); int snmp6_mib_init(void *ptr[2], size_t mibsize, size_t mibalign); void snmp6_mib_free(void *ptr[2]); +void snmp6_fill_stats(u64 *stats, struct inet6_dev *idev, int attrtype, int bytes); struct ip6_ra_chain { diff -puN include/net/ipx.h~git-net include/net/ipx.h --- a/include/net/ipx.h~git-net +++ a/include/net/ipx.h @@ -43,7 +43,7 
@@ struct ipxhdr { static __inline__ struct ipxhdr *ipx_hdr(struct sk_buff *skb) { - return (struct ipxhdr *)skb->h.raw; + return (struct ipxhdr *)skb_transport_header(skb); } struct ipx_interface { diff -puN include/net/iw_handler.h~git-net include/net/iw_handler.h --- a/include/net/iw_handler.h~git-net +++ a/include/net/iw_handler.h @@ -440,16 +440,6 @@ extern int dev_get_wireless_info(char * /* Handle IOCTLs, called in net/core/dev.c */ extern int wireless_process_ioctl(struct ifreq *ifr, unsigned int cmd); -/* Handle RtNetlink requests, called in net/core/rtnetlink.c */ -extern int wireless_rtnetlink_set(struct net_device * dev, - char * data, - int len); -extern int wireless_rtnetlink_get(struct net_device * dev, - char * data, - int len, - char ** p_buf, - int * p_len); - /* Second : functions that may be called by driver modules */ /* Send a single event to user space */ diff -puN include/net/llc_pdu.h~git-net include/net/llc_pdu.h --- a/include/net/llc_pdu.h~git-net +++ a/include/net/llc_pdu.h @@ -203,7 +203,7 @@ struct llc_pdu_sn { static inline struct llc_pdu_sn *llc_pdu_sn_hdr(struct sk_buff *skb) { - return (struct llc_pdu_sn *)skb->nh.raw; + return (struct llc_pdu_sn *)skb_network_header(skb); } /* Un-numbered PDU format (3 bytes in length) */ @@ -215,12 +215,7 @@ struct llc_pdu_un { static inline struct llc_pdu_un *llc_pdu_un_hdr(struct sk_buff *skb) { - return (struct llc_pdu_un *)skb->nh.raw; -} - -static inline void *llc_set_pdu_hdr(struct sk_buff *skb, void *ptr) -{ - return skb->nh.raw = ptr; + return (struct llc_pdu_un *)skb_network_header(skb); } /** @@ -237,7 +232,11 @@ static inline void llc_pdu_header_init(s u8 ssap, u8 dsap, u8 cr) { const int hlen = type == LLC_PDU_TYPE_U ? 3 : 4; - struct llc_pdu_un *pdu = llc_set_pdu_hdr(skb, skb_push(skb, hlen)); + struct llc_pdu_un *pdu; + + skb_push(skb, hlen); + skb_reset_network_header(skb); + pdu = llc_pdu_un_hdr(skb); pdu->dsap = dsap; pdu->ssap = ssap; pdu->ssap |= cr; diff -puN include/net/neighbour.h~git-net include/net/neighbour.h --- a/include/net/neighbour.h~git-net +++ a/include/net/neighbour.h @@ -24,6 +24,7 @@ #include #include +#include #define NUD_IN_TIMER (NUD_INCOMPLETE|NUD_REACHABLE|NUD_DELAY|NUD_PROBE) #define NUD_VALID (NUD_PERMANENT|NUD_NOARP|NUD_REACHABLE|NUD_PROBE|NUD_STALE|NUD_DELAY) @@ -213,16 +214,7 @@ extern void pneigh_enqueue(struct neig extern struct pneigh_entry *pneigh_lookup(struct neigh_table *tbl, const void *key, struct net_device *dev, int creat); extern int pneigh_delete(struct neigh_table *tbl, const void *key, struct net_device *dev); -struct netlink_callback; -struct nlmsghdr; -extern int neigh_dump_info(struct sk_buff *skb, struct netlink_callback *cb); -extern int neigh_add(struct sk_buff *skb, struct nlmsghdr *nlh, void *arg); -extern int neigh_delete(struct sk_buff *skb, struct nlmsghdr *nlh, void *arg); extern void neigh_app_ns(struct neighbour *n); - -extern int neightbl_dump_info(struct sk_buff *skb, struct netlink_callback *cb); -extern int neightbl_set(struct sk_buff *skb, struct nlmsghdr *nlh, void *arg); - extern void neigh_for_each(struct neigh_table *tbl, void (*cb)(struct neighbour *, void *), void *cookie); extern void __neigh_for_each_release(struct neigh_table *tbl, int (*cb)(struct neighbour *)); extern void pneigh_for_each(struct neigh_table *tbl, void (*cb)(struct pneigh_entry *)); diff -puN include/net/netfilter/nf_conntrack.h~git-net include/net/netfilter/nf_conntrack.h --- a/include/net/netfilter/nf_conntrack.h~git-net +++ a/include/net/netfilter/nf_conntrack.h 
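The ipx_hdr() change just above is the pattern repeated throughout this series: header accessors cast the result of skb_transport_header()/skb_network_header() instead of dereferencing the removed skb->h/skb->nh unions. For a made-up protocol the converted accessor looks like:

struct myproto_hdr {                    /* hypothetical protocol header */
        __be16 len;
};

static inline struct myproto_hdr *myproto_hdr(const struct sk_buff *skb)
{
        /* was: return (struct myproto_hdr *)skb->h.raw; */
        return (struct myproto_hdr *)skb_transport_header(skb);
}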
@@ -250,6 +250,11 @@ static inline int nf_ct_is_dying(struct return test_bit(IPS_DYING_BIT, &ct->status); } +static inline int nf_ct_is_untracked(const struct sk_buff *skb) +{ + return (skb->nfct == &nf_conntrack_untracked.ct_general); +} + extern unsigned int nf_conntrack_htable_size; extern int nf_conntrack_checksum; extern atomic_t nf_conntrack_count; diff -puN include/net/netfilter/nf_conntrack_compat.h~git-net /dev/null --- a/include/net/netfilter/nf_conntrack_compat.h +++ /dev/null @@ -1,145 +0,0 @@ -#ifndef _NF_CONNTRACK_COMPAT_H -#define _NF_CONNTRACK_COMPAT_H - -#ifdef __KERNEL__ - -#if defined(CONFIG_IP_NF_CONNTRACK) || defined(CONFIG_IP_NF_CONNTRACK_MODULE) - -#include -#include - -#ifdef CONFIG_IP_NF_CONNTRACK_MARK -static inline u_int32_t *nf_ct_get_mark(const struct sk_buff *skb, - u_int32_t *ctinfo) -{ - struct ip_conntrack *ct = ip_conntrack_get(skb, ctinfo); - - if (ct) - return &ct->mark; - else - return NULL; -} -#endif /* CONFIG_IP_NF_CONNTRACK_MARK */ - -#ifdef CONFIG_IP_NF_CONNTRACK_SECMARK -static inline u_int32_t *nf_ct_get_secmark(const struct sk_buff *skb, - u_int32_t *ctinfo) -{ - struct ip_conntrack *ct = ip_conntrack_get(skb, ctinfo); - - if (ct) - return &ct->secmark; - else - return NULL; -} -#endif /* CONFIG_IP_NF_CONNTRACK_SECMARK */ - -#ifdef CONFIG_IP_NF_CT_ACCT -static inline struct ip_conntrack_counter * -nf_ct_get_counters(const struct sk_buff *skb) -{ - enum ip_conntrack_info ctinfo; - struct ip_conntrack *ct = ip_conntrack_get(skb, &ctinfo); - - if (ct) - return ct->counters; - else - return NULL; -} -#endif /* CONFIG_IP_NF_CT_ACCT */ - -static inline int nf_ct_is_untracked(const struct sk_buff *skb) -{ - return (skb->nfct == &ip_conntrack_untracked.ct_general); -} - -static inline void nf_ct_untrack(struct sk_buff *skb) -{ - skb->nfct = &ip_conntrack_untracked.ct_general; -} - -static inline int nf_ct_get_ctinfo(const struct sk_buff *skb, - enum ip_conntrack_info *ctinfo) -{ - struct ip_conntrack *ct = ip_conntrack_get(skb, ctinfo); - return (ct != NULL); -} - -static inline int nf_ct_l3proto_try_module_get(unsigned short l3proto) -{ - need_conntrack(); - return l3proto == PF_INET ? 
0 : -1; -} - -static inline void nf_ct_l3proto_module_put(unsigned short l3proto) -{ -} - -#else /* CONFIG_IP_NF_CONNTRACK */ - -#include -#include - -#ifdef CONFIG_NF_CONNTRACK_MARK - -static inline u_int32_t *nf_ct_get_mark(const struct sk_buff *skb, - u_int32_t *ctinfo) -{ - struct nf_conn *ct = nf_ct_get(skb, ctinfo); - - if (ct) - return &ct->mark; - else - return NULL; -} -#endif /* CONFIG_NF_CONNTRACK_MARK */ - -#ifdef CONFIG_NF_CONNTRACK_SECMARK -static inline u_int32_t *nf_ct_get_secmark(const struct sk_buff *skb, - u_int32_t *ctinfo) -{ - struct nf_conn *ct = nf_ct_get(skb, ctinfo); - - if (ct) - return &ct->secmark; - else - return NULL; -} -#endif /* CONFIG_NF_CONNTRACK_MARK */ - -#ifdef CONFIG_NF_CT_ACCT -static inline struct ip_conntrack_counter * -nf_ct_get_counters(const struct sk_buff *skb) -{ - enum ip_conntrack_info ctinfo; - struct nf_conn *ct = nf_ct_get(skb, &ctinfo); - - if (ct) - return ct->counters; - else - return NULL; -} -#endif /* CONFIG_NF_CT_ACCT */ - -static inline int nf_ct_is_untracked(const struct sk_buff *skb) -{ - return (skb->nfct == &nf_conntrack_untracked.ct_general); -} - -static inline void nf_ct_untrack(struct sk_buff *skb) -{ - skb->nfct = &nf_conntrack_untracked.ct_general; -} - -static inline int nf_ct_get_ctinfo(const struct sk_buff *skb, - enum ip_conntrack_info *ctinfo) -{ - struct nf_conn *ct = nf_ct_get(skb, ctinfo); - return (ct != NULL); -} - -#endif /* CONFIG_IP_NF_CONNTRACK */ - -#endif /* __KERNEL__ */ - -#endif /* _NF_CONNTRACK_COMPAT_H */ diff -puN include/net/netfilter/nf_conntrack_core.h~git-net include/net/netfilter/nf_conntrack_core.h --- a/include/net/netfilter/nf_conntrack_core.h~git-net +++ a/include/net/netfilter/nf_conntrack_core.h @@ -27,6 +27,9 @@ extern unsigned int nf_conntrack_in(int extern int nf_conntrack_init(void); extern void nf_conntrack_cleanup(void); +extern int nf_conntrack_proto_init(void); +extern void nf_conntrack_proto_fini(void); + struct nf_conntrack_l3proto; extern struct nf_conntrack_l3proto *nf_ct_find_l3proto(u_int16_t pf); /* Like above, but you already have conntrack read lock. 
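With nf_conntrack_compat.h gone, nf_ct_is_untracked() lives in nf_conntrack.h (see the hunk above) and keeps the same semantics. An illustrative, heavily simplified caller (the real users sit inside netfilter hooks):

static unsigned int example_ct_skip_untracked(struct sk_buff *skb)
{
        /* untracked packets carry the template conntrack; skip any
         * per-connection work for them */
        if (nf_ct_is_untracked(skb))
                return NF_ACCEPT;

        /* ... per-conntrack processing would go here ... */
        return NF_ACCEPT;
}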
*/ diff -puN include/net/netfilter/nf_conntrack_ecache.h~git-net include/net/netfilter/nf_conntrack_ecache.h --- a/include/net/netfilter/nf_conntrack_ecache.h~git-net +++ a/include/net/netfilter/nf_conntrack_ecache.h @@ -20,30 +20,8 @@ DECLARE_PER_CPU(struct nf_conntrack_ecac #define CONNTRACK_ECACHE(x) (__get_cpu_var(nf_conntrack_ecache).x) extern struct atomic_notifier_head nf_conntrack_chain; -extern struct atomic_notifier_head nf_conntrack_expect_chain; - -static inline int nf_conntrack_register_notifier(struct notifier_block *nb) -{ - return atomic_notifier_chain_register(&nf_conntrack_chain, nb); -} - -static inline int nf_conntrack_unregister_notifier(struct notifier_block *nb) -{ - return atomic_notifier_chain_unregister(&nf_conntrack_chain, nb); -} - -static inline int -nf_conntrack_expect_register_notifier(struct notifier_block *nb) -{ - return atomic_notifier_chain_register(&nf_conntrack_expect_chain, nb); -} - -static inline int -nf_conntrack_expect_unregister_notifier(struct notifier_block *nb) -{ - return atomic_notifier_chain_unregister(&nf_conntrack_expect_chain, - nb); -} +extern int nf_conntrack_register_notifier(struct notifier_block *nb); +extern int nf_conntrack_unregister_notifier(struct notifier_block *nb); extern void nf_ct_deliver_cached_events(const struct nf_conn *ct); extern void __nf_ct_event_cache_init(struct nf_conn *ct); @@ -71,6 +49,10 @@ static inline void nf_conntrack_event(en atomic_notifier_call_chain(&nf_conntrack_chain, event, ct); } +extern struct atomic_notifier_head nf_conntrack_expect_chain; +extern int nf_conntrack_expect_register_notifier(struct notifier_block *nb); +extern int nf_conntrack_expect_unregister_notifier(struct notifier_block *nb); + static inline void nf_conntrack_expect_event(enum ip_conntrack_expect_events event, struct nf_conntrack_expect *exp) diff -puN include/net/netfilter/nf_conntrack_l3proto.h~git-net include/net/netfilter/nf_conntrack_l3proto.h --- a/include/net/netfilter/nf_conntrack_l3proto.h~git-net +++ a/include/net/netfilter/nf_conntrack_l3proto.h @@ -90,10 +90,7 @@ extern struct nf_conntrack_l3proto *nf_c /* Protocol registration. 
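The ecache hunk above turns the notifier (un)registration helpers into out-of-line functions but keeps the ordinary notifier_block interface, so consumers register exactly as before. Hypothetical module glue, for illustration:

static int example_ct_event(struct notifier_block *nb, unsigned long events,
                            void *ptr)
{
        /* the chain is invoked with the struct nf_conn as 'ptr' and the
         * IPCT_* event bits in 'events' */
        return NOTIFY_DONE;
}

static struct notifier_block example_ct_nb = {
        .notifier_call = example_ct_event,
};

static int __init example_init(void)
{
        return nf_conntrack_register_notifier(&example_ct_nb);
}

static void __exit example_exit(void)
{
        nf_conntrack_unregister_notifier(&example_ct_nb);
}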
*/ extern int nf_conntrack_l3proto_register(struct nf_conntrack_l3proto *proto); extern void nf_conntrack_l3proto_unregister(struct nf_conntrack_l3proto *proto); - -extern struct nf_conntrack_l3proto * -nf_ct_l3proto_find_get(u_int16_t l3proto); - +extern struct nf_conntrack_l3proto *nf_ct_l3proto_find_get(u_int16_t l3proto); extern void nf_ct_l3proto_put(struct nf_conntrack_l3proto *p); /* Existing built-in protocols */ diff -puN include/net/netfilter/nf_nat_rule.h~git-net include/net/netfilter/nf_nat_rule.h --- a/include/net/netfilter/nf_nat_rule.h~git-net +++ a/include/net/netfilter/nf_nat_rule.h @@ -4,16 +4,6 @@ #include #include -/* Compatibility definitions for ipt_FOO modules */ -#define ip_nat_range nf_nat_range -#define ip_conntrack_tuple nf_conntrack_tuple -#define ip_conntrack_get nf_ct_get -#define ip_conntrack nf_conn -#define ip_nat_setup_info nf_nat_setup_info -#define ip_nat_multi_range_compat nf_nat_multi_range_compat -#define ip_ct_iterate_cleanup nf_ct_iterate_cleanup -#define IP_NF_ASSERT NF_CT_ASSERT - extern int nf_nat_rule_init(void) __init; extern void nf_nat_rule_cleanup(void); extern int nf_nat_rule_find(struct sk_buff **pskb, diff -puN include/net/netlink.h~git-net include/net/netlink.h --- a/include/net/netlink.h~git-net +++ a/include/net/netlink.h @@ -171,6 +171,7 @@ enum { NLA_MSECS, NLA_NESTED, NLA_NUL_STRING, + NLA_BINARY, __NLA_TYPE_MAX, }; @@ -188,12 +189,13 @@ enum { * NLA_STRING Maximum length of string * NLA_NUL_STRING Maximum length of string (excluding NUL) * NLA_FLAG Unused + * NLA_BINARY Maximum length of attribute payload * All other Exact length of attribute payload * * Example: * static struct nla_policy my_policy[ATTR_MAX+1] __read_mostly = { * [ATTR_FOO] = { .type = NLA_U16 }, - * [ATTR_BAR] = { .type = NLA_STRING, len = BARSIZ }, + * [ATTR_BAR] = { .type = NLA_STRING, .len = BARSIZ }, * [ATTR_BAZ] = { .len = sizeof(struct mystruct) }, * }; */ @@ -214,7 +216,7 @@ struct nl_info { extern void netlink_run_queue(struct sock *sk, unsigned int *qlen, int (*cb)(struct sk_buff *, - struct nlmsghdr *, int *)); + struct nlmsghdr *)); extern void netlink_queue_skip(struct nlmsghdr *nlh, struct sk_buff *skb); extern int nlmsg_notify(struct sock *sk, struct sk_buff *skb, @@ -525,7 +527,7 @@ static inline struct sk_buff *nlmsg_new( */ static inline int nlmsg_end(struct sk_buff *skb, struct nlmsghdr *nlh) { - nlh->nlmsg_len = skb->tail - (unsigned char *) nlh; + nlh->nlmsg_len = skb_tail_pointer(skb) - (unsigned char *)nlh; return skb->len; } @@ -538,7 +540,7 @@ static inline int nlmsg_end(struct sk_bu */ static inline void *nlmsg_get_pos(struct sk_buff *skb) { - return skb->tail; + return skb_tail_pointer(skb); } /** @@ -548,7 +550,7 @@ static inline void *nlmsg_get_pos(struct * * Trims the message to the provided mark. Returns -1. 
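NLA_BINARY is the one genuinely new policy type here: unlike the fixed-length default, its .len is an upper bound on the attribute payload. An illustrative policy with made-up attribute names:

enum {
        EXAMPLE_ATTR_UNSPEC,            /* hypothetical attribute numbers */
        EXAMPLE_ATTR_NAME,
        EXAMPLE_ATTR_BLOB,
        __EXAMPLE_ATTR_MAX,
};
#define EXAMPLE_ATTR_MAX (__EXAMPLE_ATTR_MAX - 1)

static struct nla_policy example_policy[EXAMPLE_ATTR_MAX + 1] __read_mostly = {
        [EXAMPLE_ATTR_NAME] = { .type = NLA_NUL_STRING, .len = 16 },
        /* .len caps the payload for NLA_BINARY, it is not an exact size */
        [EXAMPLE_ATTR_BLOB] = { .type = NLA_BINARY,     .len = 64 },
};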
*/ -static inline int nlmsg_trim(struct sk_buff *skb, void *mark) +static inline int nlmsg_trim(struct sk_buff *skb, const void *mark) { if (mark) skb_trim(skb, (unsigned char *) mark - skb->data); @@ -940,7 +942,7 @@ static inline unsigned long nla_get_msec */ static inline struct nlattr *nla_nest_start(struct sk_buff *skb, int attrtype) { - struct nlattr *start = (struct nlattr *) skb->tail; + struct nlattr *start = (struct nlattr *)skb_tail_pointer(skb); if (nla_put(skb, attrtype, 0, NULL) < 0) return NULL; @@ -960,7 +962,7 @@ static inline struct nlattr *nla_nest_st */ static inline int nla_nest_end(struct sk_buff *skb, struct nlattr *start) { - start->nla_len = skb->tail - (unsigned char *) start; + start->nla_len = skb_tail_pointer(skb) - (unsigned char *)start; return skb->len; } diff -puN include/net/pkt_cls.h~git-net include/net/pkt_cls.h --- a/include/net/pkt_cls.h~git-net +++ a/include/net/pkt_cls.h @@ -326,18 +326,18 @@ static inline unsigned char * tcf_get_ba case TCF_LAYER_LINK: return skb->data; case TCF_LAYER_NETWORK: - return skb->nh.raw; + return skb_network_header(skb); case TCF_LAYER_TRANSPORT: - return skb->h.raw; + return skb_transport_header(skb); } return NULL; } -static inline int tcf_valid_offset(struct sk_buff *skb, unsigned char *ptr, - int len) +static inline int tcf_valid_offset(const struct sk_buff *skb, + const unsigned char *ptr, const int len) { - return unlikely((ptr + len) < skb->tail && ptr > skb->head); + return unlikely((ptr + len) < skb_tail_pointer(skb) && ptr > skb->head); } #ifdef CONFIG_NET_CLS_IND diff -puN include/net/pkt_sched.h~git-net include/net/pkt_sched.h --- a/include/net/pkt_sched.h~git-net +++ a/include/net/pkt_sched.h @@ -2,6 +2,7 @@ #define __NET_PKT_SCHED_H #include +#include #include struct qdisc_walker @@ -12,8 +13,6 @@ struct qdisc_walker int (*fn)(struct Qdisc *, unsigned long cl, struct qdisc_walker *); }; -extern rwlock_t qdisc_tree_lock; - #define QDISC_ALIGNTO 32 #define QDISC_ALIGN(len) (((len) + QDISC_ALIGNTO-1) & ~(QDISC_ALIGNTO-1)) @@ -37,175 +36,38 @@ static inline void *qdisc_priv(struct Qd The things are not so bad, because we may use artifical clock evaluated by integration of network data flow in the most critical places. - - Note: we do not use fastgettimeofday. - The reason is that, when it is not the same thing as - gettimeofday, it returns invalid timestamp, which is - not updated, when net_bh is active. - */ - -/* General note about internal clock. - - Any clock source returns time intervals, measured in units - close to 1usec. With source CONFIG_NET_SCH_CLK_GETTIMEOFDAY it is precisely - microseconds, otherwise something close but different chosen to minimize - arithmetic cost. Ratio usec/internal untis in form nominator/denominator - may be read from /proc/net/psched. 
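The nesting helpers above now read the write position through skb_tail_pointer() but are used exactly as before. A hypothetical dump helper (attribute numbers invented for illustration), including the nlmsg_trim() unwind on failure:

static int example_fill_nested(struct sk_buff *skb)
{
        struct nlattr *nest;

        nest = nla_nest_start(skb, 1 /* hypothetical container attribute */);
        if (nest == NULL)
                return -EMSGSIZE;

        if (nla_put_u32(skb, 2 /* hypothetical nested attribute */, 42) < 0) {
                nlmsg_trim(skb, nest);  /* drop the partially built nest */
                return -EMSGSIZE;
        }

        return nla_nest_end(skb, nest);
}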
*/ - -#ifdef CONFIG_NET_SCH_CLK_GETTIMEOFDAY - -typedef struct timeval psched_time_t; -typedef long psched_tdiff_t; - -#define PSCHED_GET_TIME(stamp) do_gettimeofday(&(stamp)) -#define PSCHED_US2JIFFIE(usecs) usecs_to_jiffies(usecs) -#define PSCHED_JIFFIE2US(delay) jiffies_to_usecs(delay) - -#else /* !CONFIG_NET_SCH_CLK_GETTIMEOFDAY */ - typedef u64 psched_time_t; typedef long psched_tdiff_t; -#ifdef CONFIG_NET_SCH_CLK_JIFFIES +/* Avoid doing 64 bit divide by 1000 */ +#define PSCHED_US2NS(x) ((s64)(x) << 10) +#define PSCHED_NS2US(x) ((x) >> 10) -#if HZ < 96 -#define PSCHED_JSCALE 14 -#elif HZ >= 96 && HZ < 192 -#define PSCHED_JSCALE 13 -#elif HZ >= 192 && HZ < 384 -#define PSCHED_JSCALE 12 -#elif HZ >= 384 && HZ < 768 -#define PSCHED_JSCALE 11 -#elif HZ >= 768 -#define PSCHED_JSCALE 10 -#endif +#define PSCHED_TICKS_PER_SEC PSCHED_NS2US(NSEC_PER_SEC) +#define PSCHED_PASTPERFECT 0 -#define PSCHED_GET_TIME(stamp) ((stamp) = (get_jiffies_64()<>PSCHED_JSCALE) -#define PSCHED_JIFFIE2US(delay) ((delay)< - -extern psched_tdiff_t psched_clock_per_hz; -extern int psched_clock_scale; -extern psched_time_t psched_time_base; -extern cycles_t psched_time_mark; - -#define PSCHED_GET_TIME(stamp) \ -do { \ - cycles_t cur = get_cycles(); \ - if (sizeof(cycles_t) == sizeof(u32)) { \ - if (cur <= psched_time_mark) \ - psched_time_base += 0x100000000ULL; \ - psched_time_mark = cur; \ - (stamp) = (psched_time_base + cur)>>psched_clock_scale; \ - } else { \ - (stamp) = cur>>psched_clock_scale; \ - } \ -} while (0) -#define PSCHED_US2JIFFIE(delay) (((delay)+psched_clock_per_hz-1)/psched_clock_per_hz) -#define PSCHED_JIFFIE2US(delay) ((delay)*psched_clock_per_hz) - -#endif /* CONFIG_NET_SCH_CLK_CPU */ - -#endif /* !CONFIG_NET_SCH_CLK_GETTIMEOFDAY */ - -#ifdef CONFIG_NET_SCH_CLK_GETTIMEOFDAY -#define PSCHED_TDIFF(tv1, tv2) \ -({ \ - int __delta_sec = (tv1).tv_sec - (tv2).tv_sec; \ - int __delta = (tv1).tv_usec - (tv2).tv_usec; \ - if (__delta_sec) { \ - switch (__delta_sec) { \ - default: \ - __delta = 0; \ - case 2: \ - __delta += USEC_PER_SEC; \ - case 1: \ - __delta += USEC_PER_SEC; \ - } \ - } \ - __delta; \ -}) - -static inline int -psched_tod_diff(int delta_sec, int bound) -{ - int delta; - - if (bound <= USEC_PER_SEC || delta_sec > (0x7FFFFFFF/USEC_PER_SEC)-1) - return bound; - delta = delta_sec * USEC_PER_SEC; - if (delta > bound || delta < 0) - delta = bound; - return delta; +static inline psched_time_t psched_get_time(void) +{ + return PSCHED_NS2US(ktime_to_ns(ktime_get())); } -#define PSCHED_TDIFF_SAFE(tv1, tv2, bound) \ -({ \ - int __delta_sec = (tv1).tv_sec - (tv2).tv_sec; \ - int __delta = (tv1).tv_usec - (tv2).tv_usec; \ - switch (__delta_sec) { \ - default: \ - __delta = psched_tod_diff(__delta_sec, bound); break; \ - case 2: \ - __delta += USEC_PER_SEC; \ - case 1: \ - __delta += USEC_PER_SEC; \ - case 0: \ - if (__delta > bound || __delta < 0) \ - __delta = bound; \ - } \ - __delta; \ -}) - -#define PSCHED_TLESS(tv1, tv2) (((tv1).tv_usec < (tv2).tv_usec && \ - (tv1).tv_sec <= (tv2).tv_sec) || \ - (tv1).tv_sec < (tv2).tv_sec) - -#define PSCHED_TADD2(tv, delta, tv_res) \ -({ \ - int __delta = (tv).tv_usec + (delta); \ - (tv_res).tv_sec = (tv).tv_sec; \ - while (__delta >= USEC_PER_SEC) { (tv_res).tv_sec++; __delta -= USEC_PER_SEC; } \ - (tv_res).tv_usec = __delta; \ -}) - -#define PSCHED_TADD(tv, delta) \ -({ \ - (tv).tv_usec += (delta); \ - while ((tv).tv_usec >= USEC_PER_SEC) { (tv).tv_sec++; \ - (tv).tv_usec -= USEC_PER_SEC; } \ -}) - -/* Set/check that time is in the "past perfect"; - it depends 
on concrete representation of system time - */ - -#define PSCHED_SET_PASTPERFECT(t) ((t).tv_sec = 0) -#define PSCHED_IS_PASTPERFECT(t) ((t).tv_sec == 0) - -#define PSCHED_AUDIT_TDIFF(t) ({ if ((t) > 2000000) (t) = 2000000; }) - -#else /* !CONFIG_NET_SCH_CLK_GETTIMEOFDAY */ - -#define PSCHED_TDIFF(tv1, tv2) (long)((tv1) - (tv2)) -#define PSCHED_TDIFF_SAFE(tv1, tv2, bound) \ - min_t(long long, (tv1) - (tv2), bound) - +static inline psched_tdiff_t +psched_tdiff_bounded(psched_time_t tv1, psched_time_t tv2, psched_time_t bound) +{ + return min(tv1 - tv2, bound); +} -#define PSCHED_TLESS(tv1, tv2) ((tv1) < (tv2)) -#define PSCHED_TADD2(tv, delta, tv_res) ((tv_res) = (tv) + (delta)) -#define PSCHED_TADD(tv, delta) ((tv) += (delta)) -#define PSCHED_SET_PASTPERFECT(t) ((t) = 0) -#define PSCHED_IS_PASTPERFECT(t) ((t) == 0) -#define PSCHED_AUDIT_TDIFF(t) +struct qdisc_watchdog { + struct hrtimer timer; + struct Qdisc *qdisc; +}; -#endif /* !CONFIG_NET_SCH_CLK_GETTIMEOFDAY */ +extern void qdisc_watchdog_init(struct qdisc_watchdog *wd, struct Qdisc *qdisc); +extern void qdisc_watchdog_schedule(struct qdisc_watchdog *wd, + psched_time_t expires); +extern void qdisc_watchdog_cancel(struct qdisc_watchdog *wd); extern struct Qdisc_ops pfifo_qdisc_ops; extern struct Qdisc_ops bfifo_qdisc_ops; diff -puN include/net/red.h~git-net include/net/red.h --- a/include/net/red.h~git-net +++ a/include/net/red.h @@ -151,17 +151,17 @@ static inline void red_set_parms(struct static inline int red_is_idling(struct red_parms *p) { - return !PSCHED_IS_PASTPERFECT(p->qidlestart); + return p->qidlestart != PSCHED_PASTPERFECT; } static inline void red_start_of_idle_period(struct red_parms *p) { - PSCHED_GET_TIME(p->qidlestart); + p->qidlestart = psched_get_time(); } static inline void red_end_of_idle_period(struct red_parms *p) { - PSCHED_SET_PASTPERFECT(p->qidlestart); + p->qidlestart = PSCHED_PASTPERFECT; } static inline void red_restart(struct red_parms *p) @@ -177,8 +177,8 @@ static inline unsigned long red_calc_qav long us_idle; int shift; - PSCHED_GET_TIME(now); - us_idle = PSCHED_TDIFF_SAFE(now, p->qidlestart, p->Scell_max); + now = psched_get_time(); + us_idle = psched_tdiff_bounded(now, p->qidlestart, p->Scell_max); /* * The problem: ideally, average length queue recalcultion should diff -puN /dev/null include/net/rtnetlink.h --- /dev/null +++ a/include/net/rtnetlink.h @@ -0,0 +1,26 @@ +#ifndef __NET_RTNETLINK_H +#define __NET_RTNETLINK_H + +#include +#include + +typedef int (*rtnl_doit_func)(struct sk_buff *, struct nlmsghdr *, void *); +typedef int (*rtnl_dumpit_func)(struct sk_buff *, struct netlink_callback *); + +extern int __rtnl_register(int protocol, int msgtype, + rtnl_doit_func, rtnl_dumpit_func); +extern void rtnl_register(int protocol, int msgtype, + rtnl_doit_func, rtnl_dumpit_func); +extern int rtnl_unregister(int protocol, int msgtype); +extern void rtnl_unregister_all(int protocol); +extern int rtnl_dump_all(struct sk_buff *skb, struct netlink_callback *cb); + +static inline int rtnl_msg_family(struct nlmsghdr *nlh) +{ + if (nlmsg_len(nlh) >= sizeof(struct rtgenmsg)) + return ((struct rtgenmsg *) nlmsg_data(nlh))->rtgen_family; + else + return AF_UNSPEC; +} + +#endif diff -puN include/net/sch_generic.h~git-net include/net/sch_generic.h --- a/include/net/sch_generic.h~git-net +++ a/include/net/sch_generic.h @@ -5,10 +5,10 @@ #include #include #include -#include #include #include #include +#include struct Qdisc_ops; struct qdisc_walker; @@ -177,14 +177,8 @@ extern void qdisc_tree_decrease_qlen(str extern 
struct Qdisc *qdisc_alloc(struct net_device *dev, struct Qdisc_ops *ops); extern struct Qdisc *qdisc_create_dflt(struct net_device *dev, struct Qdisc_ops *ops, u32 parentid); - -static inline void -tcf_destroy(struct tcf_proto *tp) -{ - tp->ops->destroy(tp); - module_put(tp->ops->owner); - kfree(tp); -} +extern void tcf_destroy(struct tcf_proto *tp); +extern void tcf_destroy_chain(struct tcf_proto *fl); static inline int __qdisc_enqueue_tail(struct sk_buff *skb, struct Qdisc *sch, struct sk_buff_head *list) diff -puN include/net/sctp/constants.h~git-net include/net/sctp/constants.h --- a/include/net/sctp/constants.h~git-net +++ a/include/net/sctp/constants.h @@ -283,7 +283,7 @@ enum { SCTP_MAX_GABS = 16 }; #define SCTP_RTO_BETA 2 /* 1/4 when converted to right shifts. */ /* Maximum number of new data packets that can be sent in a burst. */ -#define SCTP_MAX_BURST 4 +#define SCTP_DEFAULT_MAX_BURST 4 #define SCTP_CLOCK_GRANULARITY 1 /* 1 jiffy */ diff -puN include/net/sctp/structs.h~git-net include/net/sctp/structs.h --- a/include/net/sctp/structs.h~git-net +++ a/include/net/sctp/structs.h @@ -276,6 +276,7 @@ struct sctp_sock { __u32 default_context; __u32 default_timetolive; __u32 default_rcv_context; + int max_burst; /* Heartbeat interval: The endpoint sends out a Heartbeat chunk to * the destination address every heartbeat interval. This value @@ -304,10 +305,12 @@ struct sctp_sock { __u32 autoclose; __u8 nodelay; __u8 disable_fragments; - __u8 pd_mode; __u8 v4mapped; + __u8 frag_interleave; __u32 adaptation_ind; + __u32 pd_point; + atomic_t pd_mode; /* Receive to here while partial delivery is in effect. */ struct sk_buff_head pd_lobby; }; diff -puN include/net/sctp/ulpevent.h~git-net include/net/sctp/ulpevent.h --- a/include/net/sctp/ulpevent.h~git-net +++ a/include/net/sctp/ulpevent.h @@ -89,6 +89,7 @@ struct sctp_ulpevent *sctp_ulpevent_make __u16 error, __u16 outbound, __u16 inbound, + struct sctp_chunk *chunk, gfp_t gfp); struct sctp_ulpevent *sctp_ulpevent_make_peer_addr_change( diff -puN include/net/sctp/ulpqueue.h~git-net include/net/sctp/ulpqueue.h --- a/include/net/sctp/ulpqueue.h~git-net +++ a/include/net/sctp/ulpqueue.h @@ -78,7 +78,7 @@ void sctp_ulpq_partial_delivery(struct s void sctp_ulpq_abort_pd(struct sctp_ulpq *, gfp_t); /* Clear the partial data delivery condition on this socket. */ -int sctp_clear_pd(struct sock *sk); +int sctp_clear_pd(struct sock *sk, struct sctp_association *asoc); /* Skip over an SSN. */ void sctp_ulpq_skip(struct sctp_ulpq *ulpq, __u16 sid, __u16 ssn); diff -puN include/net/sctp/user.h~git-net include/net/sctp/user.h --- a/include/net/sctp/user.h~git-net +++ a/include/net/sctp/user.h @@ -97,6 +97,12 @@ enum sctp_optname { #define SCTP_DELAYED_ACK_TIME SCTP_DELAYED_ACK_TIME SCTP_CONTEXT, /* Receive Context */ #define SCTP_CONTEXT SCTP_CONTEXT + SCTP_FRAGMENT_INTERLEAVE, +#define SCTP_FRAGMENT_INTERLEAVE SCTP_FRAGMENT_INTERLEAVE + SCTP_PARTIAL_DELIVERY_POINT, /* Set/Get partial delivery point */ +#define SCTP_PARTIAL_DELIVERY_POINT SCTP_PARTIAL_DELIVERY_POINT + SCTP_MAX_BURST, /* Set/Get max burst */ +#define SCTP_MAX_BURST SCTP_MAX_BURST /* Internal Socket Options. Some of the sctp library functions are * implemented using these socket options. 
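From user space the new SCTP_MAX_BURST and SCTP_PARTIAL_DELIVERY_POINT options are plain setsockopt() calls. The payload layout is not part of these header hunks, so the int values below are an assumption, and fd is a hypothetical SCTP socket:

static void example_tune_sctp_socket(int fd)
{
        int burst = 8;          /* cap on new data packets sent per burst   */
        int pd_point = 4096;    /* start partial delivery once 4k is queued */

        setsockopt(fd, IPPROTO_SCTP, SCTP_MAX_BURST, &burst, sizeof(burst));
        setsockopt(fd, IPPROTO_SCTP, SCTP_PARTIAL_DELIVERY_POINT,
                   &pd_point, sizeof(pd_point));
}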
@@ -213,6 +219,7 @@ struct sctp_assoc_change { __u16 sac_outbound_streams; __u16 sac_inbound_streams; sctp_assoc_t sac_assoc_id; + __u8 sac_info[0]; }; /* @@ -261,6 +268,7 @@ enum sctp_spc_state { SCTP_ADDR_REMOVED, SCTP_ADDR_ADDED, SCTP_ADDR_MADE_PRIM, + SCTP_ADDR_CONFIRMED, }; @@ -508,16 +516,17 @@ struct sctp_setadaptation { * address's parameters: */ enum sctp_spp_flags { - SPP_HB_ENABLE = 1, /*Enable heartbeats*/ - SPP_HB_DISABLE = 2, /*Disable heartbeats*/ + SPP_HB_ENABLE = 1<<0, /*Enable heartbeats*/ + SPP_HB_DISABLE = 1<<1, /*Disable heartbeats*/ SPP_HB = SPP_HB_ENABLE | SPP_HB_DISABLE, - SPP_HB_DEMAND = 4, /*Send heartbeat immediately*/ - SPP_PMTUD_ENABLE = 8, /*Enable PMTU discovery*/ - SPP_PMTUD_DISABLE = 16, /*Disable PMTU discovery*/ + SPP_HB_DEMAND = 1<<2, /*Send heartbeat immediately*/ + SPP_PMTUD_ENABLE = 1<<3, /*Enable PMTU discovery*/ + SPP_PMTUD_DISABLE = 1<<4, /*Disable PMTU discovery*/ SPP_PMTUD = SPP_PMTUD_ENABLE | SPP_PMTUD_DISABLE, - SPP_SACKDELAY_ENABLE = 32, /*Enable SACK*/ - SPP_SACKDELAY_DISABLE = 64, /*Disable SACK*/ + SPP_SACKDELAY_ENABLE = 1<<5, /*Enable SACK*/ + SPP_SACKDELAY_DISABLE = 1<<6, /*Disable SACK*/ SPP_SACKDELAY = SPP_SACKDELAY_ENABLE | SPP_SACKDELAY_DISABLE, + SPP_HB_TIME_IS_ZERO = 1<<7, /* Set HB delay to 0 */ }; struct sctp_paddrparams { @@ -530,7 +539,7 @@ struct sctp_paddrparams { __u32 spp_flags; } __attribute__((packed, aligned(4))); -/* 7.1.24. Delayed Ack Timer (SCTP_DELAYED_ACK_TIME) +/* 7.1.23. Delayed Ack Timer (SCTP_DELAYED_ACK_TIME) * * This options will get or set the delayed ack timer. The time is set * in milliseconds. If the assoc_id is 0, then this sets or gets the diff -puN include/net/sock.h~git-net include/net/sock.h --- a/include/net/sock.h~git-net +++ a/include/net/sock.h @@ -202,6 +202,15 @@ struct sock { unsigned short sk_type; int sk_rcvbuf; socket_lock_t sk_lock; + /* + * The backlog queue is special, it is always used with + * the per-socket spinlock held and requires low latency + * access. Therefore we special case it's implementation. + */ + struct { + struct sk_buff *head; + struct sk_buff *tail; + } sk_backlog; wait_queue_head_t *sk_sleep; struct dst_entry *sk_dst_cache; struct xfrm_policy *sk_policy[2]; @@ -221,15 +230,6 @@ struct sock { int sk_rcvlowat; unsigned long sk_flags; unsigned long sk_lingertime; - /* - * The backlog queue is special, it is always used with - * the per-socket spinlock held and requires low latency - * access. Therefore we special case it's implementation. 
- */ - struct { - struct sk_buff *head; - struct sk_buff *tail; - } sk_backlog; struct sk_buff_head sk_error_queue; struct proto *sk_prot_creator; rwlock_t sk_callback_lock; @@ -244,7 +244,7 @@ struct sock { struct sk_filter *sk_filter; void *sk_protinfo; struct timer_list sk_timer; - struct timeval sk_stamp; + ktime_t sk_stamp; struct socket *sk_socket; void *sk_user_data; struct page *sk_sndmsg_page; @@ -390,6 +390,7 @@ enum sock_flags { SOCK_USE_WRITE_QUEUE, /* whether to call sk->sk_write_space in sock_wfree */ SOCK_DBG, /* %SO_DEBUG setting */ SOCK_RCVTSTAMP, /* %SO_TIMESTAMP setting */ + SOCK_RCVTSTAMPNS, /* %SO_TIMESTAMPNS setting */ SOCK_LOCALROUTE, /* route locally only, %SO_DONTROUTE setting */ SOCK_QUEUE_SHRUNK, /* write queue has been shrunk recently */ }; @@ -710,15 +711,6 @@ static inline void sk_stream_mem_reclaim __sk_stream_mem_reclaim(sk); } -static inline void sk_stream_writequeue_purge(struct sock *sk) -{ - struct sk_buff *skb; - - while ((skb = __skb_dequeue(&sk->sk_write_queue)) != NULL) - sk_stream_free_skb(sk, skb); - sk_stream_mem_reclaim(sk); -} - static inline int sk_stream_rmem_schedule(struct sock *sk, struct sk_buff *skb) { return (int)skb->truesize <= sk->sk_forward_alloc || @@ -1083,19 +1075,7 @@ static inline int sk_can_gso(const struc return net_gso_ok(sk->sk_route_caps, sk->sk_gso_type); } -static inline void sk_setup_caps(struct sock *sk, struct dst_entry *dst) -{ - __sk_dst_set(sk, dst); - sk->sk_route_caps = dst->dev->features; - if (sk->sk_route_caps & NETIF_F_GSO) - sk->sk_route_caps |= NETIF_F_GSO_MASK; - if (sk_can_gso(sk)) { - if (dst->header_len) - sk->sk_route_caps &= ~NETIF_F_GSO_MASK; - else - sk->sk_route_caps |= NETIF_F_SG | NETIF_F_HW_CSUM; - } -} +extern void sk_setup_caps(struct sock *sk, struct dst_entry *dst); static inline void sk_charge_skb(struct sock *sk, struct sk_buff *skb) { @@ -1256,18 +1236,6 @@ static inline struct page *sk_stream_all return page; } -#define sk_stream_for_retrans_queue(skb, sk) \ - for (skb = (sk)->sk_write_queue.next; \ - (skb != (sk)->sk_send_head) && \ - (skb != (struct sk_buff *)&(sk)->sk_write_queue); \ - skb = skb->next) - -/*from STCP for fast SACK Process*/ -#define sk_stream_for_retrans_queue_from(skb, sk) \ - for (; (skb != (sk)->sk_send_head) && \ - (skb != (struct sk_buff *)&(sk)->sk_write_queue); \ - skb = skb->next) - /* * Default write policy as shown to user space via poll/select/SIGIO */ @@ -1304,22 +1272,18 @@ static inline int sock_intr_errno(long t return timeo == MAX_SCHEDULE_TIMEOUT ? -ERESTARTSYS : -EINTR; } +extern void __sock_recv_timestamp(struct msghdr *msg, struct sock *sk, + struct sk_buff *skb); + static __inline__ void sock_recv_timestamp(struct msghdr *msg, struct sock *sk, struct sk_buff *skb) { - struct timeval stamp; + ktime_t kt = skb->tstamp; - skb_get_timestamp(skb, &stamp); - if (sock_flag(sk, SOCK_RCVTSTAMP)) { - /* Race occurred between timestamp enabling and packet - receiving. Fill in the current time for now. 
*/ - if (stamp.tv_sec == 0) - do_gettimeofday(&stamp); - skb_set_timestamp(skb, &stamp); - put_cmsg(msg, SOL_SOCKET, SO_TIMESTAMP, sizeof(struct timeval), - &stamp); - } else - sk->sk_stamp = stamp; + if (sock_flag(sk, SOCK_RCVTSTAMP)) + __sock_recv_timestamp(msg, sk, skb); + else + sk->sk_stamp = kt; } /** @@ -1350,18 +1314,17 @@ static inline void sk_eat_skb(struct soc extern void sock_enable_timestamp(struct sock *sk); extern int sock_get_timestamp(struct sock *, struct timeval __user *); +extern int sock_get_timestampns(struct sock *, struct timespec __user *); /* * Enable debug/info messages */ +extern int net_msg_warn; +#define NETDEBUG(fmt, args...) \ + do { if (net_msg_warn) printk(fmt,##args); } while (0) -#ifdef CONFIG_NETDEBUG -#define NETDEBUG(fmt, args...) printk(fmt,##args) -#define LIMIT_NETDEBUG(fmt, args...) do { if (net_ratelimit()) printk(fmt,##args); } while(0) -#else -#define NETDEBUG(fmt, args...) do { } while (0) -#define LIMIT_NETDEBUG(fmt, args...) do { } while(0) -#endif +#define LIMIT_NETDEBUG(fmt, args...) \ + do { if (net_msg_warn && net_ratelimit()) printk(fmt,##args); } while(0) /* * Macros for sleeping on a socket. Use them like this: diff -puN include/net/tcp.h~git-net include/net/tcp.h --- a/include/net/tcp.h~git-net +++ a/include/net/tcp.h @@ -220,6 +220,7 @@ extern int sysctl_tcp_app_win; extern int sysctl_tcp_adv_win_scale; extern int sysctl_tcp_tw_reuse; extern int sysctl_tcp_frto; +extern int sysctl_tcp_frto_response; extern int sysctl_tcp_low_latency; extern int sysctl_tcp_dma_copybreak; extern int sysctl_tcp_nometrics_save; @@ -230,6 +231,7 @@ extern int sysctl_tcp_mtu_probing; extern int sysctl_tcp_base_mss; extern int sysctl_tcp_workaround_signed_windows; extern int sysctl_tcp_slow_start_after_idle; +extern int sysctl_tcp_max_ssthresh; extern atomic_t tcp_memory_allocated; extern atomic_t tcp_sockets_allocated; @@ -341,6 +343,7 @@ extern struct sock * tcp_check_req(stru extern int tcp_child_process(struct sock *parent, struct sock *child, struct sk_buff *skb); +extern int tcp_use_frto(struct sock *sk); extern void tcp_enter_frto(struct sock *sk); extern void tcp_enter_loss(struct sock *sk, int how); extern void tcp_clear_retrans(struct tcp_sock *tp); @@ -417,9 +420,9 @@ extern __u32 cookie_v4_init_sequence(str /* tcp_output.c */ -extern void __tcp_push_pending_frames(struct sock *sk, struct tcp_sock *tp, - unsigned int cur_mss, int nonagle); -extern int tcp_may_send_now(struct sock *sk, struct tcp_sock *tp); +extern void __tcp_push_pending_frames(struct sock *sk, unsigned int cur_mss, + int nonagle); +extern int tcp_may_send_now(struct sock *sk); extern int tcp_retransmit_skb(struct sock *, struct sk_buff *); extern void tcp_xmit_retransmit_queue(struct sock *); extern void tcp_simple_retransmit(struct sock *); @@ -476,8 +479,10 @@ static inline void tcp_fast_path_on(stru __tcp_fast_path_on(tp, tp->snd_wnd >> tp->rx_opt.snd_wscale); } -static inline void tcp_fast_path_check(struct sock *sk, struct tcp_sock *tp) +static inline void tcp_fast_path_check(struct sock *sk) { + struct tcp_sock *tp = tcp_sk(sk); + if (skb_queue_empty(&tp->out_of_order_queue) && tp->rcv_wnd && atomic_read(&sk->sk_rmem_alloc) < sk->sk_rcvbuf && @@ -588,10 +593,10 @@ static inline void tcp_dec_pcount_approx } } -static inline void tcp_packets_out_inc(struct sock *sk, - struct tcp_sock *tp, +static inline void tcp_packets_out_inc(struct sock *sk, const struct sk_buff *skb) { + struct tcp_sock *tp = tcp_sk(sk); int orig = tp->packets_out; tp->packets_out += 
tcp_skb_pcount(skb); @@ -736,7 +741,7 @@ static inline void tcp_sync_left_out(str tp->left_out = tp->sacked_out + tp->lost_out; } -extern void tcp_enter_cwr(struct sock *sk); +extern void tcp_enter_cwr(struct sock *sk, const int set_ssthresh); extern __u32 tcp_init_cwnd(struct tcp_sock *tp, struct dst_entry *dst); /* Slow start with delack produces 3 packets of burst, so that @@ -775,18 +780,21 @@ static inline void tcp_minshall_update(s tp->snd_sml = TCP_SKB_CB(skb)->end_seq; } -static inline void tcp_check_probe_timer(struct sock *sk, struct tcp_sock *tp) +static inline void tcp_check_probe_timer(struct sock *sk) { + struct tcp_sock *tp = tcp_sk(sk); const struct inet_connection_sock *icsk = inet_csk(sk); + if (!tp->packets_out && !icsk->icsk_pending) inet_csk_reset_xmit_timer(sk, ICSK_TIME_PROBE0, icsk->icsk_rto, TCP_RTO_MAX); } -static inline void tcp_push_pending_frames(struct sock *sk, - struct tcp_sock *tp) +static inline void tcp_push_pending_frames(struct sock *sk) { - __tcp_push_pending_frames(sk, tp, tcp_current_mss(sk, 1), tp->nonagle); + struct tcp_sock *tp = tcp_sk(sk); + + __tcp_push_pending_frames(sk, tcp_current_mss(sk, 1), tp->nonagle); } static inline void tcp_init_wl(struct tcp_sock *tp, u32 ack, u32 seq) @@ -815,7 +823,7 @@ static inline __sum16 __tcp_checksum_com static inline int tcp_checksum_complete(struct sk_buff *skb) { - return skb->ip_summed != CHECKSUM_UNNECESSARY && + return !skb_csum_unnecessary(skb) && __tcp_checksum_complete(skb); } @@ -918,21 +926,7 @@ static inline void tcp_set_state(struct #endif } -static inline void tcp_done(struct sock *sk) -{ - if(sk->sk_state == TCP_SYN_SENT || sk->sk_state == TCP_SYN_RECV) - TCP_INC_STATS_BH(TCP_MIB_ATTEMPTFAILS); - - tcp_set_state(sk, TCP_CLOSE); - tcp_clear_xmit_timers(sk); - - sk->sk_shutdown = SHUTDOWN_MASK; - - if (!sock_flag(sk, SOCK_DEAD)) - sk->sk_state_change(sk); - else - inet_csk_destroy_sock(sk); -} +extern void tcp_done(struct sock *sk); static inline void tcp_sack_reset(struct tcp_options_received *rx_opt) { @@ -981,7 +975,7 @@ static inline void tcp_openreq_init(stru ireq->wscale_ok = rx_opt->wscale_ok; ireq->acked = 0; ireq->ecn_ok = 0; - ireq->rmt_port = skb->h.th->source; + ireq->rmt_port = tcp_hdr(skb)->source; } extern void tcp_enter_memory_pressure(void); @@ -1011,7 +1005,7 @@ static inline int tcp_paws_check(const s { if ((s32)(rx_opt->rcv_tsval - rx_opt->ts_recent) >= 0) return 0; - if (xtime.tv_sec >= rx_opt->ts_recent_stamp + TCP_PAWS_24DAYS) + if (get_seconds() >= rx_opt->ts_recent_stamp + TCP_PAWS_24DAYS) return 0; /* RST segments are not recommended to carry timestamp, @@ -1026,26 +1020,13 @@ static inline int tcp_paws_check(const s However, we can relax time bounds for RST segments to MSL. */ - if (rst && xtime.tv_sec >= rx_opt->ts_recent_stamp + TCP_PAWS_MSL) + if (rst && get_seconds() >= rx_opt->ts_recent_stamp + TCP_PAWS_MSL) return 0; return 1; } #define TCP_CHECK_TIMER(sk) do { } while (0) -static inline int tcp_use_frto(const struct sock *sk) -{ - const struct tcp_sock *tp = tcp_sk(sk); - - /* F-RTO must be activated in sysctl and there must be some - * unsent new data, and the advertised window should allow - * sending it. 
- */ - return (sysctl_tcp_frto && sk->sk_send_head && - !after(TCP_SKB_CB(sk->sk_send_head)->end_seq, - tp->snd_una + tp->snd_wnd)); -} - static inline void tcp_mib_init(void) { /* See RFC 2012 */ @@ -1172,6 +1153,120 @@ static inline void tcp_put_md5sig_pool( put_cpu(); } +/* write queue abstraction */ +static inline void tcp_write_queue_purge(struct sock *sk) +{ + struct sk_buff *skb; + + while ((skb = __skb_dequeue(&sk->sk_write_queue)) != NULL) + sk_stream_free_skb(sk, skb); + sk_stream_mem_reclaim(sk); +} + +static inline struct sk_buff *tcp_write_queue_head(struct sock *sk) +{ + struct sk_buff *skb = sk->sk_write_queue.next; + if (skb == (struct sk_buff *) &sk->sk_write_queue) + return NULL; + return skb; +} + +static inline struct sk_buff *tcp_write_queue_tail(struct sock *sk) +{ + struct sk_buff *skb = sk->sk_write_queue.prev; + if (skb == (struct sk_buff *) &sk->sk_write_queue) + return NULL; + return skb; +} + +static inline struct sk_buff *tcp_write_queue_next(struct sock *sk, struct sk_buff *skb) +{ + return skb->next; +} + +#define tcp_for_write_queue(skb, sk) \ + for (skb = (sk)->sk_write_queue.next; \ + (skb != (struct sk_buff *)&(sk)->sk_write_queue); \ + skb = skb->next) + +#define tcp_for_write_queue_from(skb, sk) \ + for (; (skb != (struct sk_buff *)&(sk)->sk_write_queue);\ + skb = skb->next) + +static inline struct sk_buff *tcp_send_head(struct sock *sk) +{ + return sk->sk_send_head; +} + +static inline void tcp_advance_send_head(struct sock *sk, struct sk_buff *skb) +{ + sk->sk_send_head = skb->next; + if (sk->sk_send_head == (struct sk_buff *)&sk->sk_write_queue) + sk->sk_send_head = NULL; +} + +static inline void tcp_check_send_head(struct sock *sk, struct sk_buff *skb_unlinked) +{ + if (sk->sk_send_head == skb_unlinked) + sk->sk_send_head = NULL; +} + +static inline void tcp_init_send_head(struct sock *sk) +{ + sk->sk_send_head = NULL; +} + +static inline void __tcp_add_write_queue_tail(struct sock *sk, struct sk_buff *skb) +{ + __skb_queue_tail(&sk->sk_write_queue, skb); +} + +static inline void tcp_add_write_queue_tail(struct sock *sk, struct sk_buff *skb) +{ + __tcp_add_write_queue_tail(sk, skb); + + /* Queue it, remembering where we must start sending. */ + if (sk->sk_send_head == NULL) + sk->sk_send_head = skb; +} + +static inline void __tcp_add_write_queue_head(struct sock *sk, struct sk_buff *skb) +{ + __skb_queue_head(&sk->sk_write_queue, skb); +} + +/* Insert buff after skb on the write queue of sk. */ +static inline void tcp_insert_write_queue_after(struct sk_buff *skb, + struct sk_buff *buff, + struct sock *sk) +{ + __skb_append(skb, buff, &sk->sk_write_queue); +} + +/* Insert skb between prev and next on the write queue of sk. 
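For illustration, a walk over the already-transmitted part of the queue written against the helpers introduced in this hunk looks roughly like the following; tcp_example_retrans_walk() is a made-up name, and only the tcp_* helpers come from this patch (a sketch, assuming a net/tcp.h with these additions is in scope).

/* Sketch only: replacement pattern for the removed sk_stream_for_retrans_queue()
 * macro.  The loop stops at tcp_send_head(), i.e. before any not-yet-sent data,
 * which is what the old macro's termination condition expressed directly.
 */
static void tcp_example_retrans_walk(struct sock *sk)
{
        struct sk_buff *skb;

        tcp_for_write_queue(skb, sk) {
                if (skb == tcp_send_head(sk))
                        break;          /* rest of the queue is still unsent */
                /* ... inspect the already-transmitted skb here ... */
        }
}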
*/ +static inline void tcp_insert_write_queue_before(struct sk_buff *new, + struct sk_buff *skb, + struct sock *sk) +{ + __skb_insert(new, skb->prev, skb, &sk->sk_write_queue); +} + +static inline void tcp_unlink_write_queue(struct sk_buff *skb, struct sock *sk) +{ + __skb_unlink(skb, &sk->sk_write_queue); +} + +static inline int tcp_skb_is_last(const struct sock *sk, + const struct sk_buff *skb) +{ + return skb->next == (struct sk_buff *)&sk->sk_write_queue; +} + +static inline int tcp_write_queue_empty(struct sock *sk) +{ + return skb_queue_empty(&sk->sk_write_queue); +} + /* /proc */ enum tcp_seq_states { TCP_SEQ_STATE_LISTENING, diff -puN include/net/tcp_ecn.h~git-net include/net/tcp_ecn.h --- a/include/net/tcp_ecn.h~git-net +++ a/include/net/tcp_ecn.h @@ -27,9 +27,10 @@ static inline void TCP_ECN_send_synack(s TCP_SKB_CB(skb)->flags &= ~TCPCB_FLAG_ECE; } -static inline void TCP_ECN_send_syn(struct sock *sk, struct tcp_sock *tp, - struct sk_buff *skb) +static inline void TCP_ECN_send_syn(struct sock *sk, struct sk_buff *skb) { + struct tcp_sock *tp = tcp_sk(sk); + tp->ecn_flags = 0; if (sysctl_tcp_ecn) { TCP_SKB_CB(skb)->flags |= TCPCB_FLAG_ECE|TCPCB_FLAG_CWR; @@ -44,9 +45,11 @@ TCP_ECN_make_synack(struct request_sock th->ece = 1; } -static inline void TCP_ECN_send(struct sock *sk, struct tcp_sock *tp, - struct sk_buff *skb, int tcp_header_len) +static inline void TCP_ECN_send(struct sock *sk, struct sk_buff *skb, + int tcp_header_len) { + struct tcp_sock *tp = tcp_sk(sk); + if (tp->ecn_flags & TCP_ECN_OK) { /* Not-retransmitted data segment: set ECT and inject CWR. */ if (skb->len != tcp_header_len && @@ -54,7 +57,7 @@ static inline void TCP_ECN_send(struct s INET_ECN_xmit(sk); if (tp->ecn_flags&TCP_ECN_QUEUE_CWR) { tp->ecn_flags &= ~TCP_ECN_QUEUE_CWR; - skb->h.th->cwr = 1; + tcp_hdr(skb)->cwr = 1; skb_shinfo(skb)->gso_type |= SKB_GSO_TCP_ECN; } } else { @@ -62,7 +65,7 @@ static inline void TCP_ECN_send(struct s INET_ECN_dontxmit(sk); } if (tp->ecn_flags & TCP_ECN_DEMAND_CWR) - skb->h.th->ece = 1; + tcp_hdr(skb)->ece = 1; } } @@ -70,7 +73,7 @@ static inline void TCP_ECN_send(struct s static inline void TCP_ECN_accept_cwr(struct tcp_sock *tp, struct sk_buff *skb) { - if (skb->h.th->cwr) + if (tcp_hdr(skb)->cwr) tp->ecn_flags &= ~TCP_ECN_DEMAND_CWR; } diff -puN include/net/udp.h~git-net include/net/udp.h --- a/include/net/udp.h~git-net +++ a/include/net/udp.h @@ -72,15 +72,12 @@ struct sk_buff; */ static inline __sum16 __udp_lib_checksum_complete(struct sk_buff *skb) { - if (! 
UDP_SKB_CB(skb)->partial_cov) - return __skb_checksum_complete(skb); - return csum_fold(skb_checksum(skb, 0, UDP_SKB_CB(skb)->cscov, - skb->csum)); + return __skb_checksum_complete_head(skb, UDP_SKB_CB(skb)->cscov); } static inline int udp_lib_checksum_complete(struct sk_buff *skb) { - return skb->ip_summed != CHECKSUM_UNNECESSARY && + return !skb_csum_unnecessary(skb) && __udp_lib_checksum_complete(skb); } @@ -92,8 +89,8 @@ static inline int udp_lib_checksum_compl */ static inline __wsum udp_csum_outgoing(struct sock *sk, struct sk_buff *skb) { - __wsum csum = csum_partial(skb->h.raw, sizeof(struct udphdr), 0); - + __wsum csum = csum_partial(skb_transport_header(skb), + sizeof(struct udphdr), 0); skb_queue_walk(&sk->sk_write_queue, skb) { csum = csum_add(csum, skb->csum); } diff -puN include/net/udplite.h~git-net include/net/udplite.h --- a/include/net/udplite.h~git-net +++ a/include/net/udplite.h @@ -47,11 +47,10 @@ static inline int udplite_checksum_init( return 1; } - UDP_SKB_CB(skb)->partial_cov = 0; cscov = ntohs(uh->len); if (cscov == 0) /* Indicates that full coverage is required. */ - cscov = skb->len; + ; else if (cscov < 8 || cscov > skb->len) { /* * Coverage length violates RFC 3828: log and discard silently. @@ -60,42 +59,16 @@ static inline int udplite_checksum_init( cscov, skb->len); return 1; - } else if (cscov < skb->len) + } else if (cscov < skb->len) { UDP_SKB_CB(skb)->partial_cov = 1; - - UDP_SKB_CB(skb)->cscov = cscov; - - /* - * There is no known NIC manufacturer supporting UDP-Lite yet, - * hence ip_summed is always (re-)set to CHECKSUM_NONE. - */ - skb->ip_summed = CHECKSUM_NONE; + UDP_SKB_CB(skb)->cscov = cscov; + if (skb->ip_summed == CHECKSUM_COMPLETE) + skb->ip_summed = CHECKSUM_NONE; + } return 0; } -static __inline__ int udplite4_csum_init(struct sk_buff *skb, struct udphdr *uh) -{ - int rc = udplite_checksum_init(skb, uh); - - if (!rc) - skb->csum = csum_tcpudp_nofold(skb->nh.iph->saddr, - skb->nh.iph->daddr, - skb->len, IPPROTO_UDPLITE, 0); - return rc; -} - -static __inline__ int udplite6_csum_init(struct sk_buff *skb, struct udphdr *uh) -{ - int rc = udplite_checksum_init(skb, uh); - - if (!rc) - skb->csum = ~csum_unfold(csum_ipv6_magic(&skb->nh.ipv6h->saddr, - &skb->nh.ipv6h->daddr, - skb->len, IPPROTO_UDPLITE, 0)); - return rc; -} - static inline int udplite_sender_cscov(struct udp_sock *up, struct udphdr *uh) { int cscov = up->len; @@ -128,14 +101,14 @@ static inline int udplite_sender_cscov(s static inline __wsum udplite_csum_outgoing(struct sock *sk, struct sk_buff *skb) { - int off, len, cscov = udplite_sender_cscov(udp_sk(sk), skb->h.uh); + int cscov = udplite_sender_cscov(udp_sk(sk), udp_hdr(skb)); __wsum csum = 0; skb->ip_summed = CHECKSUM_NONE; /* no HW support for checksumming */ skb_queue_walk(&sk->sk_write_queue, skb) { - off = skb->h.raw - skb->data; - len = skb->len - off; + const int off = skb_transport_offset(skb); + const int len = skb->len - off; csum = skb_checksum(skb, off, (cscov > len)? 
len : cscov, csum); diff -puN /dev/null include/net/wireless.h --- /dev/null +++ a/include/net/wireless.h @@ -0,0 +1,139 @@ +#ifndef __NET_WIRELESS_H +#define __NET_WIRELESS_H + +/* + * 802.11 device management + * + * Copyright 2007 Johannes Berg + */ + +#include +#include +#include +#include + +/** + * struct wiphy - wireless hardware description + * @idx: the wiphy index assigned to this item + * @class_dev: the class device representing /sys/class/ieee80211/ + */ +struct wiphy { + /* assign these fields before you register the wiphy */ + + /* permanent MAC address */ + u8 perm_addr[ETH_ALEN]; + + /* If multiple wiphys are registered and you're handed e.g. + * a regular netdev with assigned ieee80211_ptr, you won't + * know whether it points to a wiphy your driver has registered + * or not. Assign this to something global to your driver to + * help determine whether you own this wiphy or not. */ + void *privid; + + /* fields below are read-only, assigned by cfg80211 */ + + /* the item in /sys/class/ieee80211/ points to this, + * you need use set_wiphy_dev() (see below) */ + struct device dev; + + /* dir in debugfs: ieee80211/ */ + struct dentry *debugfsdir; + + char priv[0] __attribute__((__aligned__(NETDEV_ALIGN))); +}; + +/** struct wireless_dev - wireless per-netdev state + * + * This structure must be allocated by the driver/stack + * that uses the ieee80211_ptr field in struct net_device + * (this is intentional so it can be allocated along with + * the netdev.) + * + * @wiphy: pointer to hardware description + */ +struct wireless_dev { + struct wiphy *wiphy; + + /* private to the generic wireless code */ + struct list_head list; + struct net_device *netdev; +}; + +/** + * wiphy_priv - return priv from wiphy + */ +static inline void *wiphy_priv(struct wiphy *wiphy) +{ + BUG_ON(!wiphy); + return &wiphy->priv; +} + +/** + * set_wiphy_dev - set device pointer for wiphy + */ +static inline void set_wiphy_dev(struct wiphy *wiphy, struct device *dev) +{ + wiphy->dev.parent = dev; +} + +/** + * wiphy_dev - get wiphy dev pointer + */ +static inline struct device *wiphy_dev(struct wiphy *wiphy) +{ + return wiphy->dev.parent; +} + +/** + * wiphy_name - get wiphy name + */ +static inline char *wiphy_name(struct wiphy *wiphy) +{ + return wiphy->dev.bus_id; +} + +/** + * wdev_priv - return wiphy priv from wireless_dev + */ +static inline void *wdev_priv(struct wireless_dev *wdev) +{ + BUG_ON(!wdev); + return wiphy_priv(wdev->wiphy); +} + +/** + * wiphy_new - create a new wiphy for use with cfg80211 + * + * create a new wiphy and associate the given operations with it. + * @sizeof_priv bytes are allocated for private use. + * + * the returned pointer must be assigned to each netdev's + * ieee80211_ptr for proper operation. + */ +struct wiphy *wiphy_new(struct cfg80211_ops *ops, int sizeof_priv); + +/** + * wiphy_register - register a wiphy with cfg80211 + * + * register the given wiphy + * + * Returns a non-negative wiphy index or a negative error code. + */ +extern int wiphy_register(struct wiphy *wiphy); + +/** + * wiphy_unregister - deregister a wiphy from cfg80211 + * + * unregister a device with the given priv pointer. + * After this call, no more requests can be made with this priv + * pointer, but the call may sleep to wait for an outstanding + * request that is being handled. 
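As a usage sketch of the API declared above (the driver name, private structure and error handling are invented; only wiphy_new(), wiphy_priv(), set_wiphy_dev(), wiphy_register() and, just after this point, wiphy_unregister()/wiphy_free() come from this header), a driver would be expected to do roughly the following. It assumes wiphy_new() returns NULL on allocation failure.

/* Hypothetical driver-side sketch of the wiphy lifecycle.  struct foo_priv is
 * a placeholder for whatever the driver puts in the sizeof_priv area.
 */
struct foo_priv {
        int irq;
};

static struct wiphy *foo_wiphy_setup(struct cfg80211_ops *ops,
                                     struct device *parent)
{
        struct wiphy *wiphy;
        struct foo_priv *priv;

        wiphy = wiphy_new(ops, sizeof(*priv));
        if (!wiphy)
                return NULL;

        priv = wiphy_priv(wiphy);       /* driver-private area allocated by wiphy_new() */
        priv->irq = -1;

        set_wiphy_dev(wiphy, parent);   /* parent for the /sys/class/ieee80211/ entry */

        if (wiphy_register(wiphy) < 0) {        /* returns a wiphy index on success */
                wiphy_free(wiphy);
                return NULL;
        }
        return wiphy;
}

Teardown is the mirror image: wiphy_unregister() followed by wiphy_free().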
+ */ +extern void wiphy_unregister(struct wiphy *wiphy); + +/** + * wiphy_free - free wiphy + */ +extern void wiphy_free(struct wiphy *wiphy); + +#endif /* __NET_WIRELESS_H */ diff -puN include/net/x25device.h~git-net include/net/x25device.h --- a/include/net/x25device.h~git-net +++ a/include/net/x25device.h @@ -7,8 +7,8 @@ static inline __be16 x25_type_trans(struct sk_buff *skb, struct net_device *dev) { - skb->mac.raw = skb->data; skb->dev = dev; + skb_reset_mac_header(skb); skb->pkt_type = PACKET_HOST; return htons(ETH_P_X25); diff -puN include/net/xfrm.h~git-net include/net/xfrm.h --- a/include/net/xfrm.h~git-net +++ a/include/net/xfrm.h @@ -279,7 +279,7 @@ struct xfrm_type xfrm_address_t *(*local_addr)(struct xfrm_state *, xfrm_address_t *); xfrm_address_t *(*remote_addr)(struct xfrm_state *, xfrm_address_t *); /* Estimate maximal size of result of transformation of a dgram */ - u32 (*get_max_size)(struct xfrm_state *, int size); + u32 (*get_mtu)(struct xfrm_state *, int size); }; extern int xfrm_register_type(struct xfrm_type *type, unsigned short family); diff -puN kernel/audit.c~git-net kernel/audit.c --- a/kernel/audit.c~git-net +++ a/kernel/audit.c @@ -151,7 +151,7 @@ struct audit_buffer { static void audit_set_pid(struct audit_buffer *ab, pid_t pid) { - struct nlmsghdr *nlh = (struct nlmsghdr *)ab->skb->data; + struct nlmsghdr *nlh = nlmsg_hdr(ab->skb); nlh->nlmsg_pid = pid; } @@ -750,7 +750,7 @@ static void audit_receive_skb(struct sk_ u32 rlen; while (skb->len >= NLMSG_SPACE(0)) { - nlh = (struct nlmsghdr *)skb->data; + nlh = nlmsg_hdr(skb); if (nlh->nlmsg_len < sizeof(*nlh) || skb->len < nlh->nlmsg_len) return; rlen = NLMSG_ALIGN(nlh->nlmsg_len); @@ -795,7 +795,7 @@ static int __init audit_init(void) printk(KERN_INFO "audit: initializing netlink socket (%s)\n", audit_default ? 
"enabled" : "disabled"); audit_sock = netlink_kernel_create(NETLINK_AUDIT, 0, audit_receive, - THIS_MODULE); + NULL, THIS_MODULE); if (!audit_sock) audit_panic("cannot initialize netlink socket"); else @@ -1073,7 +1073,7 @@ static void audit_log_vformat(struct aud goto out; } va_copy(args2, args); - len = vsnprintf(skb->tail, avail, fmt, args); + len = vsnprintf(skb_tail_pointer(skb), avail, fmt, args); if (len >= avail) { /* The printk buffer is 1024 bytes long, so if we get * here and AUDIT_BUFSIZ is at least 1024, then we can @@ -1082,7 +1082,7 @@ static void audit_log_vformat(struct aud max_t(unsigned, AUDIT_BUFSIZ, 1+len-avail)); if (!avail) goto out; - len = vsnprintf(skb->tail, avail, fmt, args2); + len = vsnprintf(skb_tail_pointer(skb), avail, fmt, args2); } if (len > 0) skb_put(skb, len); @@ -1143,7 +1143,7 @@ void audit_log_hex(struct audit_buffer * return; } - ptr = skb->tail; + ptr = skb_tail_pointer(skb); for (i=0; i>4]; /* Upper nibble */ *ptr++ = hex[buf[i] & 0x0F]; /* Lower nibble */ @@ -1175,7 +1175,7 @@ static void audit_log_n_string(struct au if (!avail) return; } - ptr = skb->tail; + ptr = skb_tail_pointer(skb); *ptr++ = '"'; memcpy(ptr, string, slen); ptr += slen; @@ -1268,7 +1268,7 @@ void audit_log_end(struct audit_buffer * audit_log_lost("rate limit exceeded"); } else { if (audit_pid) { - struct nlmsghdr *nlh = (struct nlmsghdr *)ab->skb->data; + struct nlmsghdr *nlh = nlmsg_hdr(ab->skb); nlh->nlmsg_len = ab->skb->len - NLMSG_SPACE(0); skb_queue_tail(&audit_skb_queue, ab->skb); ab->skb = NULL; diff -puN kernel/hrtimer.c~git-net kernel/hrtimer.c --- a/kernel/hrtimer.c~git-net +++ a/kernel/hrtimer.c @@ -59,6 +59,7 @@ ktime_t ktime_get(void) return timespec_to_ktime(now); } +EXPORT_SYMBOL_GPL(ktime_get); /** * ktime_get_real - get the real (wall-) time in ktime_t format diff -puN kernel/taskstats.c~git-net kernel/taskstats.c --- a/kernel/taskstats.c~git-net +++ a/kernel/taskstats.c @@ -102,7 +102,7 @@ static int prepare_reply(struct genl_inf */ static int send_reply(struct sk_buff *skb, pid_t pid) { - struct genlmsghdr *genlhdr = nlmsg_data((struct nlmsghdr *)skb->data); + struct genlmsghdr *genlhdr = nlmsg_data(nlmsg_hdr(skb)); void *reply = genlmsg_data(genlhdr); int rc; @@ -121,7 +121,7 @@ static int send_reply(struct sk_buff *sk static void send_cpu_listeners(struct sk_buff *skb, struct listener_list *listeners) { - struct genlmsghdr *genlhdr = nlmsg_data((struct nlmsghdr *)skb->data); + struct genlmsghdr *genlhdr = nlmsg_data(nlmsg_hdr(skb)); struct listener *s, *tmp; struct sk_buff *skb_next, *skb_cur = skb; void *reply = genlmsg_data(genlhdr); diff -puN kernel/time.c~git-net kernel/time.c --- a/kernel/time.c~git-net +++ a/kernel/time.c @@ -452,6 +452,7 @@ struct timespec ns_to_timespec(const s64 return ts; } +EXPORT_SYMBOL(ns_to_timespec); /** * ns_to_timeval - Convert nanoseconds to timeval @@ -469,6 +470,7 @@ struct timeval ns_to_timeval(const s64 n return tv; } +EXPORT_SYMBOL(ns_to_timeval); /* * Convert jiffies to milliseconds and back. 
diff -puN lib/Makefile~git-net lib/Makefile --- a/lib/Makefile~git-net +++ a/lib/Makefile @@ -4,7 +4,7 @@ lib-y := ctype.o string.o vsprintf.o cmdline.o \ rbtree.o radix-tree.o dump_stack.o \ - idr.o div64.o int_sqrt.o bitmap.o extable.o prio_tree.o \ + idr.o int_sqrt.o bitmap.o extable.o prio_tree.o \ sha1.o irq_regs.o reciprocal_div.o lib-$(CONFIG_MMU) += ioremap.o @@ -12,7 +12,8 @@ lib-$(CONFIG_SMP) += cpumask.o lib-y += kobject.o kref.o kobject_uevent.o klist.o -obj-y += sort.o parser.o halfmd4.o debug_locks.o random32.o bust_spinlocks.o +obj-y += div64.o sort.o parser.o halfmd4.o debug_locks.o random32.o \ + bust_spinlocks.o ifeq ($(CONFIG_DEBUG_KOBJECT),y) CFLAGS_kobject.o += -DDEBUG diff -puN lib/div64.c~git-net lib/div64.c --- a/lib/div64.c~git-net +++ a/lib/div64.c @@ -23,7 +23,7 @@ /* Not needed on 64bit architectures */ #if BITS_PER_LONG == 32 -uint32_t __div64_32(uint64_t *n, uint32_t base) +uint32_t __attribute__((weak)) __div64_32(uint64_t *n, uint32_t base) { uint64_t rem = *n; uint64_t b = base; @@ -58,4 +58,24 @@ uint32_t __div64_32(uint64_t *n, uint32_ EXPORT_SYMBOL(__div64_32); +/* 64bit divisor, dividend and result. dynamic precision */ +uint64_t div64_64(uint64_t dividend, uint64_t divisor) +{ + uint32_t high, d; + + high = divisor >> 32; + if (high) { + unsigned int shift = fls(high); + + d = divisor >> shift; + dividend >>= shift; + } else + d = divisor; + + do_div(dividend, d); + + return dividend; +} +EXPORT_SYMBOL(div64_64); + #endif /* BITS_PER_LONG == 32 */ diff -puN lib/kobject_uevent.c~git-net lib/kobject_uevent.c --- a/lib/kobject_uevent.c~git-net +++ a/lib/kobject_uevent.c @@ -291,7 +291,7 @@ EXPORT_SYMBOL_GPL(add_uevent_var); static int __init kobject_uevent_init(void) { uevent_sock = netlink_kernel_create(NETLINK_KOBJECT_UEVENT, 1, NULL, - THIS_MODULE); + NULL, THIS_MODULE); if (!uevent_sock) { printk(KERN_ERR diff -puN net/802/fddi.c~git-net net/802/fddi.c --- a/net/802/fddi.c~git-net +++ a/net/802/fddi.c @@ -100,7 +100,7 @@ static int fddi_rebuild_header(struct sk struct fddihdr *fddi = (struct fddihdr *)skb->data; #ifdef CONFIG_INET - if (fddi->hdr.llc_snap.ethertype == __constant_htons(ETH_P_IP)) + if (fddi->hdr.llc_snap.ethertype == htons(ETH_P_IP)) /* Try to get ARP to resolve the header and fill destination address */ return arp_find(fddi->daddr, skb); else @@ -130,12 +130,13 @@ __be16 fddi_type_trans(struct sk_buff *s * to start of packet data. Assume 802.2 SNAP frames for now. */ - skb->mac.raw = skb->data; /* point to frame control (FC) */ + skb->dev = dev; + skb_reset_mac_header(skb); /* point to frame control (FC) */ if(fddi->hdr.llc_8022_1.dsap==0xe0) { skb_pull(skb, FDDI_K_8022_HLEN-3); - type = __constant_htons(ETH_P_802_2); + type = htons(ETH_P_802_2); } else { diff -puN net/802/hippi.c~git-net net/802/hippi.c --- a/net/802/hippi.c~git-net +++ a/net/802/hippi.c @@ -60,7 +60,7 @@ static int hippi_header(struct sk_buff * * Due to the stupidity of the little endian byte-order we * have to set the fp field this way. 
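A short illustration of the div64_64() helper added to lib/div64.c above: unlike do_div(), it accepts a full 64-bit divisor, shifting both operands down when the divisor does not fit in 32 bits, so the quotient can be approximate in that case. Sketch only; the function name is invented and it assumes the div64_64() prototype from this series is visible.

/* Sketch: average of a 64-bit total over a 64-bit sample count on 32-bit
 * hosts, using the div64_64() helper introduced above.
 */
static inline u64 example_mean(u64 total, u64 nr_samples)
{
        if (!nr_samples)
                return 0;
        return div64_64(total, nr_samples);
}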
*/ - hip->fp.fixed = __constant_htonl(0x04800018); + hip->fp.fixed = htonl(0x04800018); hip->fp.d2_size = htonl(len + 8); hip->le.fc = 0; hip->le.double_wide = 0; /* only HIPPI 800 for the time being */ @@ -104,7 +104,7 @@ static int hippi_rebuild_header(struct s * Only IP is currently supported */ - if(hip->snap.ethertype != __constant_htons(ETH_P_IP)) + if(hip->snap.ethertype != htons(ETH_P_IP)) { printk(KERN_DEBUG "%s: unable to resolve type %X addresses.\n",skb->dev->name,ntohs(hip->snap.ethertype)); return 0; @@ -126,14 +126,14 @@ __be16 hippi_type_trans(struct sk_buff * { struct hippi_hdr *hip; - hip = (struct hippi_hdr *) skb->data; - /* * This is actually wrong ... question is if we really should * set the raw address here. */ - skb->mac.raw = skb->data; - skb_pull(skb, HIPPI_HLEN); + skb->dev = dev; + skb_reset_mac_header(skb); + hip = (struct hippi_hdr *)skb_mac_header(skb); + skb_pull(skb, HIPPI_HLEN); /* * No fancy promisc stuff here now. diff -puN net/802/psnap.c~git-net net/802/psnap.c --- a/net/802/psnap.c~git-net +++ a/net/802/psnap.c @@ -56,10 +56,10 @@ static int snap_rcv(struct sk_buff *skb, }; rcu_read_lock(); - proto = find_snap_client(skb->h.raw); + proto = find_snap_client(skb_transport_header(skb)); if (proto) { /* Pass the frame on. */ - skb->h.raw += 5; + skb->transport_header += 5; skb_pull_rcsum(skb, 5); rc = proto->rcvfunc(skb, dev, &snap_packet_type, orig_dev); } else { diff -puN net/802/tr.c~git-net net/802/tr.c --- a/net/802/tr.c~git-net +++ a/net/802/tr.c @@ -189,11 +189,13 @@ static int tr_rebuild_header(struct sk_b __be16 tr_type_trans(struct sk_buff *skb, struct net_device *dev) { - struct trh_hdr *trh=(struct trh_hdr *)skb->data; + struct trh_hdr *trh; struct trllc *trllc; unsigned riflen=0; - skb->mac.raw = skb->data; + skb->dev = dev; + skb_reset_mac_header(skb); + trh = tr_hdr(skb); if(trh->saddr[0] & TR_RII) riflen = (ntohs(trh->rcf) & TR_RCF_LEN_MASK) >> 8; @@ -552,7 +554,8 @@ static int rif_seq_show(struct seq_file if(j==1) { segment=ntohs(entry->rseg[j-1])>>4; seq_printf(seq," %03X",segment); - }; + } + segment=ntohs(entry->rseg[j])>>4; brdgnmb=ntohs(entry->rseg[j-1])&0x00f; seq_printf(seq,"-%01X-%03X",brdgnmb,segment); diff -puN net/8021q/vlan.c~git-net net/8021q/vlan.c --- a/net/8021q/vlan.c~git-net +++ a/net/8021q/vlan.c @@ -470,7 +470,7 @@ static struct net_device *register_vlan_ */ default: snprintf(name, IFNAMSIZ, "vlan%.4i", VLAN_ID); - }; + } new_dev = alloc_netdev(sizeof(struct vlan_dev_info), name, vlan_setup); @@ -685,7 +685,7 @@ static int vlan_device_event(struct noti break; } break; - }; + } out: return NOTIFY_DONE; @@ -819,7 +819,7 @@ static int vlan_ioctl_handler(void __use printk(VLAN_DBG "%s: Unknown VLAN CMD: %x \n", __FUNCTION__, args.cmd); return -EINVAL; - }; + } out: return err; } diff -puN net/8021q/vlan_dev.c~git-net net/8021q/vlan_dev.c --- a/net/8021q/vlan_dev.c~git-net +++ a/net/8021q/vlan_dev.c @@ -66,7 +66,7 @@ int vlan_dev_rebuild_header(struct sk_bu memcpy(veth->h_source, dev->dev_addr, ETH_ALEN); break; - }; + } return 0; } @@ -83,7 +83,7 @@ static inline struct sk_buff *vlan_check /* Lifted from Gleb's VLAN code... */ memmove(skb->data - ETH_HLEN, skb->data - VLAN_ETH_HLEN, 12); - skb->mac.raw += VLAN_HLEN; + skb->mac_header += VLAN_HLEN; } } @@ -219,7 +219,7 @@ int vlan_skb_recv(struct sk_buff *skb, s break; default: break; - }; + } /* Was a VLAN packet, grab the encapsulated protocol, which the layer * three protocols care about. 
@@ -258,7 +258,7 @@ int vlan_skb_recv(struct sk_buff *skb, s * won't work for fault tolerant netware but does for the rest. */ if (*(unsigned short *)rawp == 0xFFFF) { - skb->protocol = __constant_htons(ETH_P_802_3); + skb->protocol = htons(ETH_P_802_3); /* place it back on the queue to be handled by true layer 3 protocols. */ @@ -281,7 +281,7 @@ int vlan_skb_recv(struct sk_buff *skb, s /* * Real 802.2 LLC */ - skb->protocol = __constant_htons(ETH_P_802_2); + skb->protocol = htons(ETH_P_802_2); /* place it back on the queue to be handled by upper layer protocols. */ @@ -382,7 +382,7 @@ int vlan_dev_hard_header(struct sk_buff } skb->protocol = htons(ETH_P_8021Q); - skb->nh.raw = skb->data; + skb_reset_network_header(skb); } /* Before delegating work to the lower layer, enter our MAC-address */ @@ -448,7 +448,7 @@ int vlan_dev_hard_start_xmit(struct sk_b * OTHER THINGS LIKE FDDI/TokenRing/802.3 SNAPs... */ - if (veth->h_vlan_proto != __constant_htons(ETH_P_8021Q)) { + if (veth->h_vlan_proto != htons(ETH_P_8021Q)) { int orig_headroom = skb_headroom(skb); unsigned short veth_TCI; diff -puN net/Kconfig~git-net net/Kconfig --- a/net/Kconfig~git-net +++ a/net/Kconfig @@ -27,13 +27,6 @@ if NET menu "Networking options" -config NETDEBUG - bool "Network packet debugging" - help - You can say Y here if you want to get additional messages useful in - debugging bad packets, but can overwhelm logs under denial of service - attacks. - source "net/packet/Kconfig" source "net/unix/Kconfig" source "net/xfrm/Kconfig" @@ -219,14 +212,17 @@ endmenu source "net/ax25/Kconfig" source "net/irda/Kconfig" source "net/bluetooth/Kconfig" -source "net/ieee80211/Kconfig" - -config WIRELESS_EXT - bool config FIB_RULES bool +menu "Wireless" + +source "net/wireless/Kconfig" +source "net/ieee80211/Kconfig" + +endmenu + endif # if NET endmenu # Networking diff -puN net/Makefile~git-net net/Makefile --- a/net/Makefile~git-net +++ a/net/Makefile @@ -52,3 +52,5 @@ obj-$(CONFIG_IUCV) += iucv/ ifeq ($(CONFIG_NET),y) obj-$(CONFIG_SYSCTL) += sysctl_net.o endif + +obj-y += wireless/ diff -puN net/appletalk/aarp.c~git-net net/appletalk/aarp.c --- a/net/appletalk/aarp.c~git-net +++ a/net/appletalk/aarp.c @@ -118,7 +118,9 @@ static void __aarp_send_query(struct aar /* Set up the buffer */ skb_reserve(skb, dev->hard_header_len + aarp_dl->header_length); - skb->nh.raw = skb->h.raw = skb_put(skb, sizeof(*eah)); + skb_reset_network_header(skb); + skb_reset_transport_header(skb); + skb_put(skb, sizeof(*eah)); skb->protocol = htons(ETH_P_ATALK); skb->dev = dev; eah = aarp_hdr(skb); @@ -163,7 +165,9 @@ static void aarp_send_reply(struct net_d /* Set up the buffer */ skb_reserve(skb, dev->hard_header_len + aarp_dl->header_length); - skb->nh.raw = skb->h.raw = skb_put(skb, sizeof(*eah)); + skb_reset_network_header(skb); + skb_reset_transport_header(skb); + skb_put(skb, sizeof(*eah)); skb->protocol = htons(ETH_P_ATALK); skb->dev = dev; eah = aarp_hdr(skb); @@ -212,7 +216,9 @@ static void aarp_send_probe(struct net_d /* Set up the buffer */ skb_reserve(skb, dev->hard_header_len + aarp_dl->header_length); - skb->nh.raw = skb->h.raw = skb_put(skb, sizeof(*eah)); + skb_reset_network_header(skb); + skb_reset_transport_header(skb); + skb_put(skb, sizeof(*eah)); skb->protocol = htons(ETH_P_ATALK); skb->dev = dev; eah = aarp_hdr(skb); @@ -539,7 +545,7 @@ int aarp_send_ddp(struct net_device *dev int hash; struct aarp_entry *a; - skb->nh.raw = skb->data; + skb_reset_network_header(skb); /* Check for LocalTalk first */ if (dev->type == ARPHRD_LOCALTLK) { 
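The skb->mac.raw / skb->nh.raw / skb->h.raw conversions running through the hunks above all follow the same pattern; condensed into one receive-path sketch it looks like this. example_rcv() is invented, while the skb_* accessors and ip_hdr()/tcp_hdr() are the real helpers this series converts to.

/* Sketch of the accessor style used throughout this series. */
#include <linux/skbuff.h>
#include <linux/netdevice.h>
#include <linux/if_ether.h>
#include <linux/in.h>
#include <linux/ip.h>
#include <linux/tcp.h>

static int example_rcv(struct sk_buff *skb, struct net_device *dev)
{
        struct iphdr *iph;

        skb->dev = dev;
        skb_reset_mac_header(skb);              /* was: skb->mac.raw = skb->data */
        skb_pull(skb, ETH_HLEN);
        skb_reset_network_header(skb);          /* was: skb->nh.raw = skb->data  */

        iph = ip_hdr(skb);                      /* was: skb->nh.iph              */
        skb_set_transport_header(skb, iph->ihl * 4);

        if (iph->protocol == IPPROTO_TCP)
                return tcp_hdr(skb)->syn;       /* was: skb->h.th                */

        return 0;
}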
diff -puN net/appletalk/ddp.c~git-net net/appletalk/ddp.c --- a/net/appletalk/ddp.c~git-net +++ a/net/appletalk/ddp.c @@ -1275,7 +1275,7 @@ static int handle_ip_over_ddp(struct sk_ skb->protocol = htons(ETH_P_IP); skb_pull(skb, 13); skb->dev = dev; - skb->h.raw = skb->data; + skb_reset_transport_header(skb); stats = dev->priv; stats->rx_packets++; @@ -1383,10 +1383,10 @@ free_it: * @pt - packet type * * Receive a packet (in skb) from device dev. This has come from the SNAP - * decoder, and on entry skb->h.raw is the DDP header, skb->len is the DDP - * header, skb->len is the DDP length. The physical headers have been - * extracted. PPP should probably pass frames marked as for this layer. - * [ie ARPHRD_ETHERTALK] + * decoder, and on entry skb->transport_header is the DDP header, skb->len + * is the DDP header, skb->len is the DDP length. The physical headers + * have been extracted. PPP should probably pass frames marked as for this + * layer. [ie ARPHRD_ETHERTALK] */ static int atalk_rcv(struct sk_buff *skb, struct net_device *dev, struct packet_type *pt, struct net_device *orig_dev) @@ -1484,7 +1484,7 @@ static int ltalk_rcv(struct sk_buff *skb struct packet_type *pt, struct net_device *orig_dev) { /* Expand any short form frames */ - if (skb->mac.raw[2] == 1) { + if (skb_mac_header(skb)[2] == 1) { struct ddpehdr *ddp; /* Find our address */ struct atalk_addr *ap = atalk_find_dev_addr(dev); @@ -1510,8 +1510,8 @@ static int ltalk_rcv(struct sk_buff *skb * we write the network numbers ! */ - ddp->deh_dnode = skb->mac.raw[0]; /* From physical header */ - ddp->deh_snode = skb->mac.raw[1]; /* From physical header */ + ddp->deh_dnode = skb_mac_header(skb)[0]; /* From physical header */ + ddp->deh_snode = skb_mac_header(skb)[1]; /* From physical header */ ddp->deh_dnet = ap->s_net; /* Network number */ ddp->deh_snet = ap->s_net; @@ -1522,7 +1522,7 @@ static int ltalk_rcv(struct sk_buff *skb /* Non routable, so force a drop if we slip up later */ ddp->deh_len_hops = htons(skb->len + (DDP_MAXHOPS << 10)); } - skb->h.raw = skb->data; + skb_reset_transport_header(skb); return atalk_rcv(skb, dev, pt, orig_dev); freeit: @@ -1771,6 +1771,9 @@ static int atalk_ioctl(struct socket *so case SIOCGSTAMP: rc = sock_get_timestamp(sk, argp); break; + case SIOCGSTAMPNS: + rc = sock_get_timestampns(sk, argp); + break; /* Routing */ case SIOCADDRT: case SIOCDELRT: diff -puN net/atm/br2684.c~git-net net/atm/br2684.c --- a/net/atm/br2684.c~git-net +++ a/net/atm/br2684.c @@ -173,7 +173,7 @@ static int br2684_xmit_vcc(struct sk_buf } skb_push(skb, minheadroom); if (brvcc->encaps == e_llc) - memcpy(skb->data, llc_oui_pid_pad, 10); + skb_copy_to_linear_data(skb, llc_oui_pid_pad, 10); else memset(skb->data, 0, 2); #endif /* FASTER_VERSION */ @@ -375,11 +375,11 @@ packet_fails_filter(__be16 type, struct { if (brvcc->filter.netmask == 0) return 0; /* no filter in place */ - if (type == __constant_htons(ETH_P_IP) && + if (type == htons(ETH_P_IP) && (((struct iphdr *) (skb->data))->daddr & brvcc->filter. netmask) == brvcc->filter.prefix) return 0; - if (type == __constant_htons(ETH_P_ARP)) + if (type == htons(ETH_P_ARP)) return 0; /* TODO: we should probably filter ARPs too.. don't want to have * them returning values that don't make sense, or is that ok? @@ -458,7 +458,7 @@ static void br2684_push(struct atm_vcc * /* FIXME: tcpdump shows that pointer to mac header is 2 bytes earlier, than should be. What else should I set? 
*/ skb_pull(skb, plen); - skb->mac.raw = ((char *) (skb->data)) - ETH_HLEN; + skb_set_mac_header(skb, -ETH_HLEN); skb->pkt_type = PACKET_HOST; #ifdef CONFIG_BR2684_FAST_TRANS skb->protocol = ((u16 *) skb->data)[-1]; diff -puN net/atm/clip.c~git-net net/atm/clip.c --- a/net/atm/clip.c~git-net +++ a/net/atm/clip.c @@ -213,7 +213,7 @@ static void clip_push(struct atm_vcc *vc return; } ATM_SKB(skb)->vcc = vcc; - skb->mac.raw = skb->data; + skb_reset_mac_header(skb); if (!clip_vcc->encap || skb->len < RFC1483LLC_LEN || memcmp(skb->data, llc_oui, sizeof (llc_oui))) diff -puN net/atm/ioctl.c~git-net net/atm/ioctl.c --- a/net/atm/ioctl.c~git-net +++ a/net/atm/ioctl.c @@ -82,6 +82,9 @@ int vcc_ioctl(struct socket *sock, unsig case SIOCGSTAMP: /* borrowed from IP */ error = sock_get_timestamp(sk, argp); goto done; + case SIOCGSTAMPNS: /* borrowed from IP */ + error = sock_get_timestampns(sk, argp); + goto done; case ATM_SETSC: printk(KERN_WARNING "ATM_SETSC is obsolete\n"); error = 0; diff -puN net/atm/lec.c~git-net net/atm/lec.c --- a/net/atm/lec.c~git-net +++ a/net/atm/lec.c @@ -283,8 +283,8 @@ static int lec_start_xmit(struct sk_buff } DPRINTK("skbuff head:%lx data:%lx tail:%lx end:%lx\n", - (long)skb->head, (long)skb->data, (long)skb->tail, - (long)skb->end); + (long)skb->head, (long)skb->data, (long)skb_tail_pointer(skb), + (long)skb_end_pointer(skb)); #if defined(CONFIG_BRIDGE) || defined(CONFIG_BRIDGE_MODULE) if (memcmp(skb->data, bridge_ula_lec, sizeof(bridge_ula_lec)) == 0) lec_handle_bridge(skb, dev); @@ -576,8 +576,8 @@ static int lec_atm_send(struct atm_vcc * break; } skb2->len = sizeof(struct atmlec_msg); - memcpy(skb2->data, mesg, - sizeof(struct atmlec_msg)); + skb_copy_to_linear_data(skb2, mesg, + sizeof(*mesg)); atm_force_charge(priv->lecd, skb2->truesize); sk = sk_atm(priv->lecd); skb_queue_tail(&sk->sk_receive_queue, skb2); @@ -825,7 +825,6 @@ static void lec_push(struct atm_vcc *vcc if (!hlist_empty(&priv->lec_arp_empty_ones)) { lec_arp_check_empties(priv, vcc, skb); } - skb->dev = dev; skb_pull(skb, 2); /* skip lec_id */ #ifdef CONFIG_TR if (priv->is_trdev) @@ -1338,7 +1337,7 @@ static int lane2_resolve(struct net_devi if (skb == NULL) return -1; skb->len = *sizeoftlvs; - memcpy(skb->data, *tlvs, *sizeoftlvs); + skb_copy_to_linear_data(skb, *tlvs, *sizeoftlvs); retval = send_to_lecd(priv, l_arp_xmt, dst_mac, NULL, skb); } return retval; @@ -1372,7 +1371,7 @@ static int lane2_associate_req(struct ne if (skb == NULL) return 0; skb->len = sizeoftlvs; - memcpy(skb->data, tlvs, sizeoftlvs); + skb_copy_to_linear_data(skb, tlvs, sizeoftlvs); retval = send_to_lecd(priv, l_associate_req, NULL, NULL, skb); if (retval != 0) printk("lec.c: lane2_associate_req() failed\n"); diff -puN net/atm/mpc.c~git-net net/atm/mpc.c --- a/net/atm/mpc.c~git-net +++ a/net/atm/mpc.c @@ -504,11 +504,13 @@ static int send_via_shortcut(struct sk_b tagged_llc_snap_hdr.tag = entry->ctrl_info.tag; skb_pull(skb, ETH_HLEN); /* get rid of Eth header */ skb_push(skb, sizeof(tagged_llc_snap_hdr)); /* add LLC/SNAP header */ - memcpy(skb->data, &tagged_llc_snap_hdr, sizeof(tagged_llc_snap_hdr)); + skb_copy_to_linear_data(skb, &tagged_llc_snap_hdr, + sizeof(tagged_llc_snap_hdr)); } else { skb_pull(skb, ETH_HLEN); /* get rid of Eth header */ skb_push(skb, sizeof(struct llc_snap_hdr)); /* add LLC/SNAP header + tag */ - memcpy(skb->data, &llc_snap_mpoa_data, sizeof(struct llc_snap_hdr)); + skb_copy_to_linear_data(skb, &llc_snap_mpoa_data, + sizeof(struct llc_snap_hdr)); } atomic_add(skb->truesize, 
&sk_atm(entry->shortcut)->sk_wmem_alloc); @@ -711,11 +713,12 @@ static void mpc_push(struct atm_vcc *vcc return; } skb_push(new_skb, eg->ctrl_info.DH_length); /* add MAC header */ - memcpy(new_skb->data, eg->ctrl_info.DLL_header, eg->ctrl_info.DH_length); + skb_copy_to_linear_data(new_skb, eg->ctrl_info.DLL_header, + eg->ctrl_info.DH_length); new_skb->protocol = eth_type_trans(new_skb, dev); - new_skb->nh.raw = new_skb->data; + skb_reset_network_header(new_skb); - eg->latest_ip_addr = new_skb->nh.iph->saddr; + eg->latest_ip_addr = ip_hdr(new_skb)->saddr; eg->packets_rcvd++; mpc->eg_ops->put(eg); @@ -936,7 +939,7 @@ int msg_to_mpoad(struct k_message *mesg, if (skb == NULL) return -ENOMEM; skb_put(skb, sizeof(struct k_message)); - memcpy(skb->data, mesg, sizeof(struct k_message)); + skb_copy_to_linear_data(skb, mesg, sizeof(*mesg)); atm_force_charge(mpc->mpoad_vcc, skb->truesize); sk = sk_atm(mpc->mpoad_vcc); diff -puN net/ax25/af_ax25.c~git-net net/ax25/af_ax25.c --- a/net/ax25/af_ax25.c~git-net +++ a/net/ax25/af_ax25.c @@ -1127,22 +1127,22 @@ static int __must_check ax25_connect(str switch (sk->sk_state) { case TCP_SYN_SENT: /* still trying */ err = -EINPROGRESS; - goto out; + goto out_release; case TCP_ESTABLISHED: /* connection established */ sock->state = SS_CONNECTED; - goto out; + goto out_release; case TCP_CLOSE: /* connection refused */ sock->state = SS_UNCONNECTED; err = -ECONNREFUSED; - goto out; + goto out_release; } } if (sk->sk_state == TCP_ESTABLISHED && sk->sk_type == SOCK_SEQPACKET) { err = -EISCONN; /* No reconnect on a seqpacket socket */ - goto out; + goto out_release; } sk->sk_state = TCP_CLOSE; @@ -1159,12 +1159,12 @@ static int __must_check ax25_connect(str /* Valid number of digipeaters ? */ if (fsa->fsa_ax25.sax25_ndigis < 1 || fsa->fsa_ax25.sax25_ndigis > AX25_MAX_DIGIS) { err = -EINVAL; - goto out; + goto out_release; } if ((digi = kmalloc(sizeof(ax25_digi), GFP_KERNEL)) == NULL) { err = -ENOBUFS; - goto out; + goto out_release; } digi->ndigi = fsa->fsa_ax25.sax25_ndigis; @@ -1194,7 +1194,7 @@ static int __must_check ax25_connect(str current->comm); if ((err = ax25_rt_autobind(ax25, &fsa->fsa_ax25.sax25_call)) < 0) { kfree(digi); - goto out; + goto out_release; } ax25_fillin_cb(ax25, ax25->ax25_dev); @@ -1203,7 +1203,7 @@ static int __must_check ax25_connect(str if (ax25->ax25_dev == NULL) { kfree(digi); err = -EHOSTUNREACH; - goto out; + goto out_release; } } @@ -1213,7 +1213,7 @@ static int __must_check ax25_connect(str kfree(digi); err = -EADDRINUSE; /* Already such a connection */ ax25_cb_put(ax25t); - goto out; + goto out_release; } ax25->dest_addr = fsa->fsa_ax25.sax25_call; @@ -1223,7 +1223,7 @@ static int __must_check ax25_connect(str if (sk->sk_type != SOCK_SEQPACKET) { sock->state = SS_CONNECTED; sk->sk_state = TCP_ESTABLISHED; - goto out; + goto out_release; } /* Move to connecting socket, ax.25 lapb WAIT_UA.. 
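The memcpy() conversions in the ATM hunks above (br2684, lec, mpc) are mechanical; for reference, the two directions of the new helpers look like this. A sketch only: example_copy(), hdr and hlen are placeholders, and the destination is assumed to already have room, as it does after the skb_push() calls in the code above.

/* Sketch: the two linear-data copy helpers used in the conversions above.
 * They touch only the linear part of the skb, exactly like the memcpy()s
 * they replace.
 */
static void example_copy(struct sk_buff *dst, struct sk_buff *src,
                         const void *hdr, unsigned int hlen)
{
        /* was: memcpy(dst->data, hdr, hlen); */
        skb_copy_to_linear_data(dst, hdr, hlen);

        /* was: memcpy(skb_put(dst, src->len), src->data, src->len); */
        skb_copy_from_linear_data(src, skb_put(dst, src->len), src->len);
}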
*/ @@ -1255,55 +1255,53 @@ static int __must_check ax25_connect(str /* Now the loop */ if (sk->sk_state != TCP_ESTABLISHED && (flags & O_NONBLOCK)) { err = -EINPROGRESS; - goto out; + goto out_release; } if (sk->sk_state == TCP_SYN_SENT) { - struct task_struct *tsk = current; - DECLARE_WAITQUEUE(wait, tsk); + DEFINE_WAIT(wait); - add_wait_queue(sk->sk_sleep, &wait); for (;;) { + prepare_to_wait(sk->sk_sleep, &wait, + TASK_INTERRUPTIBLE); if (sk->sk_state != TCP_SYN_SENT) break; - set_current_state(TASK_INTERRUPTIBLE); - release_sock(sk); - if (!signal_pending(tsk)) { + if (!signal_pending(current)) { + release_sock(sk); schedule(); lock_sock(sk); continue; } - current->state = TASK_RUNNING; - remove_wait_queue(sk->sk_sleep, &wait); - return -ERESTARTSYS; + err = -ERESTARTSYS; + break; } - current->state = TASK_RUNNING; - remove_wait_queue(sk->sk_sleep, &wait); + finish_wait(sk->sk_sleep, &wait); + + if (err) + goto out_release; } if (sk->sk_state != TCP_ESTABLISHED) { /* Not in ABM, not in WAIT_UA -> failed */ sock->state = SS_UNCONNECTED; err = sock_error(sk); /* Always set at this point */ - goto out; + goto out_release; } sock->state = SS_CONNECTED; - err=0; -out: + err = 0; +out_release: release_sock(sk); return err; } - static int ax25_accept(struct socket *sock, struct socket *newsock, int flags) { - struct task_struct *tsk = current; - DECLARE_WAITQUEUE(wait, tsk); struct sk_buff *skb; struct sock *newsk; + DEFINE_WAIT(wait); struct sock *sk; int err = 0; @@ -1328,30 +1326,29 @@ static int ax25_accept(struct socket *so * The read queue this time is holding sockets ready to use * hooked into the SABM we saved */ - add_wait_queue(sk->sk_sleep, &wait); for (;;) { + prepare_to_wait(sk->sk_sleep, &wait, TASK_INTERRUPTIBLE); skb = skb_dequeue(&sk->sk_receive_queue); if (skb) break; - release_sock(sk); - current->state = TASK_INTERRUPTIBLE; if (flags & O_NONBLOCK) { - current->state = TASK_RUNNING; - remove_wait_queue(sk->sk_sleep, &wait); - return -EWOULDBLOCK; + err = -EWOULDBLOCK; + break; } - if (!signal_pending(tsk)) { + if (!signal_pending(current)) { + release_sock(sk); schedule(); lock_sock(sk); continue; } - current->state = TASK_RUNNING; - remove_wait_queue(sk->sk_sleep, &wait); - return -ERESTARTSYS; + err = -ERESTARTSYS; + break; } - current->state = TASK_RUNNING; - remove_wait_queue(sk->sk_sleep, &wait); + finish_wait(sk->sk_sleep, &wait); + + if (err) + goto out; newsk = skb->sk; newsk->sk_socket = newsock; @@ -1425,7 +1422,6 @@ static int ax25_sendmsg(struct kiocb *io struct sockaddr_ax25 sax; struct sk_buff *skb; ax25_digi dtmp, *dp; - unsigned char *asmptr; ax25_cb *ax25; size_t size; int lv, err, addr_len = msg->msg_namelen; @@ -1548,13 +1544,11 @@ static int ax25_sendmsg(struct kiocb *io goto out; } - skb->nh.raw = skb->data; + skb_reset_network_header(skb); /* Add the PID if one is not supplied by the user in the skb */ - if (!ax25->pidincl) { - asmptr = skb_push(skb, 1); - *asmptr = sk->sk_protocol; - } + if (!ax25->pidincl) + *skb_push(skb, 1) = sk->sk_protocol; SOCK_DEBUG(sk, "AX.25: Transmitting buffer\n"); @@ -1573,7 +1567,7 @@ static int ax25_sendmsg(struct kiocb *io goto out; } - asmptr = skb_push(skb, 1 + ax25_addr_size(dp)); + skb_push(skb, 1 + ax25_addr_size(dp)); SOCK_DEBUG(sk, "Building AX.25 Header (dp=%p).\n", dp); @@ -1581,17 +1575,17 @@ static int ax25_sendmsg(struct kiocb *io SOCK_DEBUG(sk, "Num digipeaters=%d\n", dp->ndigi); /* Build an AX.25 header */ - asmptr += (lv = ax25_addr_build(asmptr, &ax25->source_addr, - &sax.sax25_call, dp, - AX25_COMMAND, 
AX25_MODULUS)); + lv = ax25_addr_build(skb->data, &ax25->source_addr, &sax.sax25_call, + dp, AX25_COMMAND, AX25_MODULUS); SOCK_DEBUG(sk, "Built header (%d bytes)\n",lv); - skb->h.raw = asmptr; + skb_set_transport_header(skb, lv); - SOCK_DEBUG(sk, "base=%p pos=%p\n", skb->data, asmptr); + SOCK_DEBUG(sk, "base=%p pos=%p\n", + skb->data, skb_transport_header(skb)); - *asmptr = AX25_UI; + *skb_transport_header(skb) = AX25_UI; /* Datagram frames go straight out of the door as UI */ ax25_queue_xmit(skb, ax25->ax25_dev->dev); @@ -1631,8 +1625,8 @@ static int ax25_recvmsg(struct kiocb *io if (!ax25_sk(sk)->pidincl) skb_pull(skb, 1); /* Remove PID */ - skb->h.raw = skb->data; - copied = skb->len; + skb_reset_transport_header(skb); + copied = skb->len; if (copied > size) { copied = size; @@ -1645,9 +1639,10 @@ static int ax25_recvmsg(struct kiocb *io struct sockaddr_ax25 *sax = (struct sockaddr_ax25 *)msg->msg_name; ax25_digi digi; ax25_address src; + const unsigned char *mac = skb_mac_header(skb); - ax25_addr_parse(skb->mac.raw+1, skb->data-skb->mac.raw-1, &src, NULL, &digi, NULL, NULL); - + ax25_addr_parse(mac + 1, skb->data - mac - 1, &src, NULL, + &digi, NULL, NULL); sax->sax25_family = AF_AX25; /* We set this correctly, even though we may not let the application know the digi calls further down (because it @@ -1711,6 +1706,10 @@ static int ax25_ioctl(struct socket *soc res = sock_get_timestamp(sk, argp); break; + case SIOCGSTAMPNS: + res = sock_get_timestampns(sk, argp); + break; + case SIOCAX25ADDUID: /* Add a uid to the uid/call map table */ case SIOCAX25DELUID: /* Delete a uid from the uid/call map table */ case SIOCAX25GETUID: { diff -puN net/ax25/ax25_ds_subr.c~git-net net/ax25/ax25_ds_subr.c --- a/net/ax25/ax25_ds_subr.c~git-net +++ a/net/ax25/ax25_ds_subr.c @@ -136,7 +136,7 @@ static void ax25_kiss_cmd(ax25_dev *ax25 if ((skb = alloc_skb(2, GFP_ATOMIC)) == NULL) return; - skb->nh.raw = skb->data; + skb_reset_network_header(skb); p = skb_put(skb, 2); *p++ = cmd; diff -puN net/ax25/ax25_in.c~git-net net/ax25/ax25_in.c --- a/net/ax25/ax25_in.c~git-net +++ a/net/ax25/ax25_in.c @@ -61,12 +61,14 @@ static int ax25_rx_fragment(ax25_cb *ax2 skb_reserve(skbn, AX25_MAX_HEADER_LEN); skbn->dev = ax25->ax25_dev->dev; - skbn->h.raw = skbn->data; - skbn->nh.raw = skbn->data; + skb_reset_network_header(skbn); + skb_reset_transport_header(skbn); /* Copy data from the fragments */ while ((skbo = skb_dequeue(&ax25->frag_queue)) != NULL) { - memcpy(skb_put(skbn, skbo->len), skbo->data, skbo->len); + skb_copy_from_linear_data(skbo, + skb_put(skbn, skbo->len), + skbo->len); kfree_skb(skbo); } @@ -122,8 +124,8 @@ int ax25_rx_iframe(ax25_cb *ax25, struct } skb_pull(skb, 1); /* Remove PID */ - skb->mac.raw = skb->nh.raw; - skb->nh.raw = skb->data; + skb_reset_mac_header(skb); + skb_reset_network_header(skb); skb->dev = ax25->ax25_dev->dev; skb->pkt_type = PACKET_HOST; skb->protocol = htons(ETH_P_IP); @@ -196,7 +198,7 @@ static int ax25_rcv(struct sk_buff *skb, * Process the AX.25/LAPB frame. 
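The ax25_connect()/ax25_accept() hunks a little further up replace an open-coded sleep on sk->sk_sleep with the standard prepare_to_wait()/finish_wait() idiom. Stripped of the AX.25 specifics, the converted loop has this shape (a sketch; example_wait_for() and cond() are placeholders).

/* Generic shape of the wait-loop conversion done for ax25_connect() and
 * ax25_accept() above; cond() stands in for the protocol-specific test.
 */
#include <linux/wait.h>
#include <linux/sched.h>
#include <net/sock.h>

static int example_wait_for(struct sock *sk, int (*cond)(struct sock *))
{
        DEFINE_WAIT(wait);
        int err = 0;

        for (;;) {
                prepare_to_wait(sk->sk_sleep, &wait, TASK_INTERRUPTIBLE);
                if (cond(sk))
                        break;
                if (signal_pending(current)) {
                        err = -ERESTARTSYS;
                        break;
                }
                release_sock(sk);       /* sleep without the socket lock held */
                schedule();
                lock_sock(sk);
        }
        finish_wait(sk->sk_sleep, &wait);
        return err;
}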
*/ - skb->h.raw = skb->data; + skb_reset_transport_header(skb); if ((ax25_dev = ax25_dev_ax25dev(dev)) == NULL) { kfree_skb(skb); @@ -233,7 +235,7 @@ static int ax25_rcv(struct sk_buff *skb, /* UI frame - bypass LAPB processing */ if ((*skb->data & ~0x10) == AX25_UI && dp.lastrepeat + 1 == dp.ndigi) { - skb->h.raw = skb->data + 2; /* skip control and pid */ + skb_set_transport_header(skb, 2); /* skip control and pid */ ax25_send_to_raw(&dest, skb, skb->data[1]); @@ -246,8 +248,8 @@ static int ax25_rcv(struct sk_buff *skb, switch (skb->data[1]) { case AX25_P_IP: skb_pull(skb,2); /* drop PID/CTRL */ - skb->h.raw = skb->data; - skb->nh.raw = skb->data; + skb_reset_transport_header(skb); + skb_reset_network_header(skb); skb->dev = dev; skb->pkt_type = PACKET_HOST; skb->protocol = htons(ETH_P_IP); @@ -256,8 +258,8 @@ static int ax25_rcv(struct sk_buff *skb, case AX25_P_ARP: skb_pull(skb,2); - skb->h.raw = skb->data; - skb->nh.raw = skb->data; + skb_reset_transport_header(skb); + skb_reset_network_header(skb); skb->dev = dev; skb->pkt_type = PACKET_HOST; skb->protocol = htons(ETH_P_ARP); diff -puN net/ax25/ax25_ip.c~git-net net/ax25/ax25_ip.c --- a/net/ax25/ax25_ip.c~git-net +++ a/net/ax25/ax25_ip.c @@ -121,7 +121,7 @@ int ax25_rebuild_header(struct sk_buff * digipeat = route->digipeat; dev = route->dev; ip_mode = route->ip_mode; - }; + } if (dev == NULL) dev = skb->dev; @@ -171,7 +171,7 @@ int ax25_rebuild_header(struct sk_buff * src_c = *(ax25_address *)(bp + 8); skb_pull(ourskb, AX25_HEADER_LEN - 1); /* Keep PID */ - ourskb->nh.raw = ourskb->data; + skb_reset_network_header(ourskb); ax25=ax25_send_frame( ourskb, diff -puN net/ax25/ax25_out.c~git-net net/ax25/ax25_out.c --- a/net/ax25/ax25_out.c~git-net +++ a/net/ax25/ax25_out.c @@ -148,8 +148,9 @@ void ax25_output(ax25_cb *ax25, int pacl if (ka9qfrag == 1) { skb_reserve(skbn, frontlen + 2); - skbn->nh.raw = skbn->data + (skb->nh.raw - skb->data); - memcpy(skb_put(skbn, len), skb->data, len); + skb_set_network_header(skbn, + skb_network_offset(skb)); + skb_copy_from_linear_data(skb, skb_put(skbn, len), len); p = skb_push(skbn, 2); *p++ = AX25_P_SEGMENT; @@ -161,8 +162,9 @@ void ax25_output(ax25_cb *ax25, int pacl } } else { skb_reserve(skbn, frontlen + 1); - skbn->nh.raw = skbn->data + (skb->nh.raw - skb->data); - memcpy(skb_put(skbn, len), skb->data, len); + skb_set_network_header(skbn, + skb_network_offset(skb)); + skb_copy_from_linear_data(skb, skb_put(skbn, len), len); p = skb_push(skbn, 1); *p = AX25_P_TEXT; } @@ -205,7 +207,7 @@ static void ax25_send_iframe(ax25_cb *ax if (skb == NULL) return; - skb->nh.raw = skb->data; + skb_reset_network_header(skb); if (ax25->modulus == AX25_MODULUS) { frame = skb_push(skb, 1); diff -puN net/ax25/ax25_subr.c~git-net net/ax25/ax25_subr.c --- a/net/ax25/ax25_subr.c~git-net +++ a/net/ax25/ax25_subr.c @@ -162,7 +162,7 @@ void ax25_send_control(ax25_cb *ax25, in skb_reserve(skb, ax25->ax25_dev->dev->hard_header_len); - skb->nh.raw = skb->data; + skb_reset_network_header(skb); /* Assume a response - address structure for DTE */ if (ax25->modulus == AX25_MODULUS) { @@ -205,7 +205,7 @@ void ax25_return_dm(struct net_device *d return; /* Next SABM will get DM'd */ skb_reserve(skb, dev->hard_header_len); - skb->nh.raw = skb->data; + skb_reset_network_header(skb); ax25_digi_invert(digi, &retdigi); diff -puN net/bluetooth/af_bluetooth.c~git-net net/bluetooth/af_bluetooth.c --- a/net/bluetooth/af_bluetooth.c~git-net +++ a/net/bluetooth/af_bluetooth.c @@ -221,7 +221,7 @@ int bt_sock_recvmsg(struct kiocb *iocb, 
copied = len; } - skb->h.raw = skb->data; + skb_reset_transport_header(skb); err = skb_copy_datagram_iovec(skb, 0, msg->msg_iov, copied); skb_free_datagram(sk, skb); diff -puN net/bluetooth/bnep/core.c~git-net net/bluetooth/bnep/core.c --- a/net/bluetooth/bnep/core.c~git-net +++ a/net/bluetooth/bnep/core.c @@ -326,7 +326,7 @@ static inline int bnep_rx_frame(struct b return 0; } - skb->mac.raw = skb->data; + skb_reset_mac_header(skb); /* Verify and pull out header */ if (!skb_pull(skb, __bnep_rx_hlen[type & BNEP_TYPE_MASK])) @@ -364,26 +364,28 @@ static inline int bnep_rx_frame(struct b case BNEP_COMPRESSED_SRC_ONLY: memcpy(__skb_put(nskb, ETH_ALEN), s->eh.h_dest, ETH_ALEN); - memcpy(__skb_put(nskb, ETH_ALEN), skb->mac.raw, ETH_ALEN); + memcpy(__skb_put(nskb, ETH_ALEN), skb_mac_header(skb), ETH_ALEN); put_unaligned(s->eh.h_proto, (__be16 *) __skb_put(nskb, 2)); break; case BNEP_COMPRESSED_DST_ONLY: - memcpy(__skb_put(nskb, ETH_ALEN), skb->mac.raw, ETH_ALEN); - memcpy(__skb_put(nskb, ETH_ALEN + 2), s->eh.h_source, ETH_ALEN + 2); + memcpy(__skb_put(nskb, ETH_ALEN), skb_mac_header(skb), + ETH_ALEN); + memcpy(__skb_put(nskb, ETH_ALEN + 2), s->eh.h_source, + ETH_ALEN + 2); break; case BNEP_GENERAL: - memcpy(__skb_put(nskb, ETH_ALEN * 2), skb->mac.raw, ETH_ALEN * 2); + memcpy(__skb_put(nskb, ETH_ALEN * 2), skb_mac_header(skb), + ETH_ALEN * 2); put_unaligned(s->eh.h_proto, (__be16 *) __skb_put(nskb, 2)); break; } - memcpy(__skb_put(nskb, skb->len), skb->data, skb->len); + skb_copy_from_linear_data(skb, __skb_put(nskb, skb->len), skb->len); kfree_skb(skb); s->stats.rx_packets++; - nskb->dev = dev; nskb->ip_summed = CHECKSUM_NONE; nskb->protocol = eth_type_trans(nskb, dev); netif_rx_ni(nskb); diff -puN net/bluetooth/cmtp/core.c~git-net net/bluetooth/cmtp/core.c --- a/net/bluetooth/cmtp/core.c~git-net +++ a/net/bluetooth/cmtp/core.c @@ -124,7 +124,7 @@ static inline void cmtp_add_msgpart(stru } if (skb && (skb->len > 0)) - memcpy(skb_put(nskb, skb->len), skb->data, skb->len); + skb_copy_from_linear_data(skb, skb_put(nskb, skb->len), skb->len); memcpy(skb_put(nskb, count), buf, count); @@ -256,7 +256,7 @@ static void cmtp_process_transmit(struct hdr[2] = size >> 8; } - memcpy(skb_put(nskb, size), skb->data, size); + skb_copy_from_linear_data(skb, skb_put(nskb, size), size); skb_pull(skb, size); if (skb->len > 0) { diff -puN net/bluetooth/hci_conn.c~git-net net/bluetooth/hci_conn.c --- a/net/bluetooth/hci_conn.c~git-net +++ a/net/bluetooth/hci_conn.c @@ -72,11 +72,11 @@ void hci_acl_connect(struct hci_conn *co inquiry_entry_age(ie) <= INQUIRY_ENTRY_AGE_MAX) { cp.pscan_rep_mode = ie->data.pscan_rep_mode; cp.pscan_mode = ie->data.pscan_mode; - cp.clock_offset = ie->data.clock_offset | __cpu_to_le16(0x8000); + cp.clock_offset = ie->data.clock_offset | cpu_to_le16(0x8000); memcpy(conn->dev_class, ie->data.dev_class, 3); } - cp.pkt_type = __cpu_to_le16(hdev->pkt_type & ACL_PTYPE_MASK); + cp.pkt_type = cpu_to_le16(hdev->pkt_type & ACL_PTYPE_MASK); if (lmp_rswitch_capable(hdev) && !(hdev->link_mode & HCI_LM_MASTER)) cp.role_switch = 0x01; else @@ -107,7 +107,7 @@ void hci_acl_disconn(struct hci_conn *co conn->state = BT_DISCONN; - cp.handle = __cpu_to_le16(conn->handle); + cp.handle = cpu_to_le16(conn->handle); cp.reason = reason; hci_send_cmd(conn->hdev, OGF_LINK_CTL, OCF_DISCONNECT, sizeof(cp), &cp); @@ -123,8 +123,8 @@ void hci_add_sco(struct hci_conn *conn, conn->state = BT_CONNECT; conn->out = 1; - cp.pkt_type = __cpu_to_le16(hdev->pkt_type & SCO_PTYPE_MASK); - cp.handle = __cpu_to_le16(handle); + 
cp.pkt_type = cpu_to_le16(hdev->pkt_type & SCO_PTYPE_MASK); + cp.handle = cpu_to_le16(handle); hci_send_cmd(hdev, OGF_LINK_CTL, OCF_ADD_SCO, sizeof(cp), &cp); } @@ -348,7 +348,7 @@ int hci_conn_auth(struct hci_conn *conn) if (!test_and_set_bit(HCI_CONN_AUTH_PEND, &conn->pend)) { struct hci_cp_auth_requested cp; - cp.handle = __cpu_to_le16(conn->handle); + cp.handle = cpu_to_le16(conn->handle); hci_send_cmd(conn->hdev, OGF_LINK_CTL, OCF_AUTH_REQUESTED, sizeof(cp), &cp); } return 0; @@ -368,7 +368,7 @@ int hci_conn_encrypt(struct hci_conn *co if (hci_conn_auth(conn)) { struct hci_cp_set_conn_encrypt cp; - cp.handle = __cpu_to_le16(conn->handle); + cp.handle = cpu_to_le16(conn->handle); cp.encrypt = 1; hci_send_cmd(conn->hdev, OGF_LINK_CTL, OCF_SET_CONN_ENCRYPT, sizeof(cp), &cp); } @@ -383,7 +383,7 @@ int hci_conn_change_link_key(struct hci_ if (!test_and_set_bit(HCI_CONN_AUTH_PEND, &conn->pend)) { struct hci_cp_change_conn_link_key cp; - cp.handle = __cpu_to_le16(conn->handle); + cp.handle = cpu_to_le16(conn->handle); hci_send_cmd(conn->hdev, OGF_LINK_CTL, OCF_CHANGE_CONN_LINK_KEY, sizeof(cp), &cp); } return 0; @@ -423,7 +423,7 @@ void hci_conn_enter_active_mode(struct h if (!test_and_set_bit(HCI_CONN_MODE_CHANGE_PEND, &conn->pend)) { struct hci_cp_exit_sniff_mode cp; - cp.handle = __cpu_to_le16(conn->handle); + cp.handle = cpu_to_le16(conn->handle); hci_send_cmd(hdev, OGF_LINK_POLICY, OCF_EXIT_SNIFF_MODE, sizeof(cp), &cp); } @@ -452,21 +452,21 @@ void hci_conn_enter_sniff_mode(struct hc if (lmp_sniffsubr_capable(hdev) && lmp_sniffsubr_capable(conn)) { struct hci_cp_sniff_subrate cp; - cp.handle = __cpu_to_le16(conn->handle); - cp.max_latency = __constant_cpu_to_le16(0); - cp.min_remote_timeout = __constant_cpu_to_le16(0); - cp.min_local_timeout = __constant_cpu_to_le16(0); + cp.handle = cpu_to_le16(conn->handle); + cp.max_latency = cpu_to_le16(0); + cp.min_remote_timeout = cpu_to_le16(0); + cp.min_local_timeout = cpu_to_le16(0); hci_send_cmd(hdev, OGF_LINK_POLICY, OCF_SNIFF_SUBRATE, sizeof(cp), &cp); } if (!test_and_set_bit(HCI_CONN_MODE_CHANGE_PEND, &conn->pend)) { struct hci_cp_sniff_mode cp; - cp.handle = __cpu_to_le16(conn->handle); - cp.max_interval = __cpu_to_le16(hdev->sniff_max_interval); - cp.min_interval = __cpu_to_le16(hdev->sniff_min_interval); - cp.attempt = __constant_cpu_to_le16(4); - cp.timeout = __constant_cpu_to_le16(1); + cp.handle = cpu_to_le16(conn->handle); + cp.max_interval = cpu_to_le16(hdev->sniff_max_interval); + cp.min_interval = cpu_to_le16(hdev->sniff_min_interval); + cp.attempt = cpu_to_le16(4); + cp.timeout = cpu_to_le16(1); hci_send_cmd(hdev, OGF_LINK_POLICY, OCF_SNIFF_MODE, sizeof(cp), &cp); } diff -puN net/bluetooth/hci_core.c~git-net net/bluetooth/hci_core.c --- a/net/bluetooth/hci_core.c~git-net +++ a/net/bluetooth/hci_core.c @@ -149,7 +149,7 @@ static int __hci_request(struct hci_dev default: err = -ETIMEDOUT; break; - }; + } hdev->req_status = hdev->req_result = 0; @@ -216,10 +216,10 @@ static void hci_init_req(struct hci_dev /* Host buffer size */ { struct hci_cp_host_buffer_size cp; - cp.acl_mtu = __cpu_to_le16(HCI_MAX_ACL_SIZE); + cp.acl_mtu = cpu_to_le16(HCI_MAX_ACL_SIZE); cp.sco_mtu = HCI_MAX_SCO_SIZE; - cp.acl_max_pkt = __cpu_to_le16(0xffff); - cp.sco_max_pkt = __cpu_to_le16(0xffff); + cp.acl_max_pkt = cpu_to_le16(0xffff); + cp.sco_max_pkt = cpu_to_le16(0xffff); hci_send_cmd(hdev, OGF_HOST_CTL, OCF_HOST_BUFFER_SIZE, sizeof(cp), &cp); } #endif @@ -240,11 +240,11 @@ static void hci_init_req(struct hci_dev } /* Page timeout ~20 secs */ - param = 
__cpu_to_le16(0x8000); + param = cpu_to_le16(0x8000); hci_send_cmd(hdev, OGF_HOST_CTL, OCF_WRITE_PG_TIMEOUT, 2, &param); /* Connection accept timeout ~20 secs */ - param = __cpu_to_le16(0x7d00); + param = cpu_to_le16(0x7d00); hci_send_cmd(hdev, OGF_HOST_CTL, OCF_WRITE_CA_TIMEOUT, 2, &param); } @@ -1034,7 +1034,7 @@ int hci_send_cmd(struct hci_dev *hdev, _ } hdr = (struct hci_command_hdr *) skb_put(skb, HCI_COMMAND_HDR_SIZE); - hdr->opcode = __cpu_to_le16(hci_opcode_pack(ogf, ocf)); + hdr->opcode = cpu_to_le16(hci_opcode_pack(ogf, ocf)); hdr->plen = plen; if (plen) @@ -1060,7 +1060,7 @@ void *hci_sent_cmd_data(struct hci_dev * hdr = (void *) hdev->sent_cmd->data; - if (hdr->opcode != __cpu_to_le16(hci_opcode_pack(ogf, ocf))) + if (hdr->opcode != cpu_to_le16(hci_opcode_pack(ogf, ocf))) return NULL; BT_DBG("%s ogf 0x%x ocf 0x%x", hdev->name, ogf, ocf); @@ -1074,11 +1074,11 @@ static void hci_add_acl_hdr(struct sk_bu struct hci_acl_hdr *hdr; int len = skb->len; - hdr = (struct hci_acl_hdr *) skb_push(skb, HCI_ACL_HDR_SIZE); - hdr->handle = __cpu_to_le16(hci_handle_pack(handle, flags)); - hdr->dlen = __cpu_to_le16(len); - - skb->h.raw = (void *) hdr; + skb_push(skb, HCI_ACL_HDR_SIZE); + skb_reset_transport_header(skb); + hdr = (struct hci_acl_hdr *)skb_transport_header(skb); + hdr->handle = cpu_to_le16(hci_handle_pack(handle, flags)); + hdr->dlen = cpu_to_le16(len); } int hci_send_acl(struct hci_conn *conn, struct sk_buff *skb, __u16 flags) @@ -1140,11 +1140,12 @@ int hci_send_sco(struct hci_conn *conn, return -EINVAL; } - hdr.handle = __cpu_to_le16(conn->handle); + hdr.handle = cpu_to_le16(conn->handle); hdr.dlen = skb->len; - skb->h.raw = skb_push(skb, HCI_SCO_HDR_SIZE); - memcpy(skb->h.raw, &hdr, HCI_SCO_HDR_SIZE); + skb_push(skb, HCI_SCO_HDR_SIZE); + skb_reset_transport_header(skb); + memcpy(skb_transport_header(skb), &hdr, HCI_SCO_HDR_SIZE); skb->dev = (void *) hdev; bt_cb(skb)->pkt_type = HCI_SCODATA_PKT; @@ -1387,7 +1388,7 @@ static void hci_rx_task(unsigned long ar case HCI_SCODATA_PKT: kfree_skb(skb); continue; - }; + } } /* Process frame */ diff -puN net/bluetooth/hci_event.c~git-net net/bluetooth/hci_event.c --- a/net/bluetooth/hci_event.c~git-net +++ a/net/bluetooth/hci_event.c @@ -783,7 +783,7 @@ static inline void hci_conn_complete_evt if (conn->type == ACL_LINK && hdev->link_policy) { struct hci_cp_write_link_policy cp; cp.handle = ev->handle; - cp.policy = __cpu_to_le16(hdev->link_policy); + cp.policy = cpu_to_le16(hdev->link_policy); hci_send_cmd(hdev, OGF_LINK_POLICY, OCF_WRITE_LINK_POLICY, sizeof(cp), &cp); } @@ -793,8 +793,8 @@ static inline void hci_conn_complete_evt struct hci_cp_change_conn_ptype cp; cp.handle = ev->handle; cp.pkt_type = (conn->type == ACL_LINK) ?
- __cpu_to_le16(hdev->pkt_type & ACL_PTYPE_MASK): - __cpu_to_le16(hdev->pkt_type & SCO_PTYPE_MASK); + cpu_to_le16(hdev->pkt_type & ACL_PTYPE_MASK): + cpu_to_le16(hdev->pkt_type & SCO_PTYPE_MASK); hci_send_cmd(hdev, OGF_LINK_CTL, OCF_CHANGE_CONN_PTYPE, sizeof(cp), &cp); @@ -970,7 +970,7 @@ static inline void hci_auth_complete_evt if (test_bit(HCI_CONN_ENCRYPT_PEND, &conn->pend)) { if (!ev->status) { struct hci_cp_set_conn_encrypt cp; - cp.handle = __cpu_to_le16(conn->handle); + cp.handle = cpu_to_le16(conn->handle); cp.encrypt = 1; hci_send_cmd(conn->hdev, OGF_LINK_CTL, OCF_SET_CONN_ENCRYPT, sizeof(cp), &cp); diff -puN net/bluetooth/hci_sock.c~git-net net/bluetooth/hci_sock.c --- a/net/bluetooth/hci_sock.c~git-net +++ a/net/bluetooth/hci_sock.c @@ -375,7 +375,7 @@ static int hci_sock_recvmsg(struct kiocb copied = len; } - skb->h.raw = skb->data; + skb_reset_transport_header(skb); err = skb_copy_datagram_iovec(skb, 0, msg->msg_iov, copied); hci_sock_cmsg(sk, msg, skb); diff -puN net/bluetooth/l2cap.c~git-net net/bluetooth/l2cap.c --- a/net/bluetooth/l2cap.c~git-net +++ a/net/bluetooth/l2cap.c @@ -459,8 +459,8 @@ static void __l2cap_sock_close(struct so sk->sk_state = BT_DISCONN; l2cap_sock_set_timer(sk, sk->sk_sndtimeo); - req.dcid = __cpu_to_le16(l2cap_pi(sk)->dcid); - req.scid = __cpu_to_le16(l2cap_pi(sk)->scid); + req.dcid = cpu_to_le16(l2cap_pi(sk)->dcid); + req.scid = cpu_to_le16(l2cap_pi(sk)->scid); l2cap_send_cmd(conn, l2cap_get_ident(conn), L2CAP_DISCONN_REQ, sizeof(req), &req); } else { @@ -652,7 +652,7 @@ static int l2cap_do_connect(struct sock if (sk->sk_type == SOCK_SEQPACKET) { struct l2cap_conn_req req; l2cap_pi(sk)->ident = l2cap_get_ident(conn); - req.scid = __cpu_to_le16(l2cap_pi(sk)->scid); + req.scid = cpu_to_le16(l2cap_pi(sk)->scid); req.psm = l2cap_pi(sk)->psm; l2cap_send_cmd(conn, l2cap_pi(sk)->ident, L2CAP_CONN_REQ, sizeof(req), &req); @@ -868,8 +868,8 @@ static inline int l2cap_do_send(struct s /* Create L2CAP header */ lh = (struct l2cap_hdr *) skb_put(skb, L2CAP_HDR_SIZE); - lh->cid = __cpu_to_le16(l2cap_pi(sk)->dcid); - lh->len = __cpu_to_le16(len + (hlen - L2CAP_HDR_SIZE)); + lh->cid = cpu_to_le16(l2cap_pi(sk)->dcid); + lh->len = cpu_to_le16(len + (hlen - L2CAP_HDR_SIZE)); if (sk->sk_type == SOCK_DGRAM) put_unaligned(l2cap_pi(sk)->psm, (u16 *) skb_put(skb, 2)); @@ -1096,7 +1096,7 @@ static void l2cap_conn_ready(struct l2ca } else if (sk->sk_state == BT_CONNECT) { struct l2cap_conn_req req; l2cap_pi(sk)->ident = l2cap_get_ident(conn); - req.scid = __cpu_to_le16(l2cap_pi(sk)->scid); + req.scid = cpu_to_le16(l2cap_pi(sk)->scid); req.psm = l2cap_pi(sk)->psm; l2cap_send_cmd(conn, l2cap_pi(sk)->ident, L2CAP_CONN_REQ, sizeof(req), &req); } @@ -1192,13 +1192,13 @@ static struct sk_buff *l2cap_build_cmd(s return NULL; lh = (struct l2cap_hdr *) skb_put(skb, L2CAP_HDR_SIZE); - lh->len = __cpu_to_le16(L2CAP_CMD_HDR_SIZE + dlen); - lh->cid = __cpu_to_le16(0x0001); + lh->len = cpu_to_le16(L2CAP_CMD_HDR_SIZE + dlen); + lh->cid = cpu_to_le16(0x0001); cmd = (struct l2cap_cmd_hdr *) skb_put(skb, L2CAP_CMD_HDR_SIZE); cmd->code = code; cmd->ident = ident; - cmd->len = __cpu_to_le16(dlen); + cmd->len = cpu_to_le16(dlen); if (dlen) { count -= L2CAP_HDR_SIZE + L2CAP_CMD_HDR_SIZE; @@ -1316,11 +1316,11 @@ static void l2cap_add_conf_opt(void **pt break; case 2: - *((u16 *) opt->val) = __cpu_to_le16(val); + *((u16 *) opt->val) = cpu_to_le16(val); break; case 4: - *((u32 *) opt->val) = __cpu_to_le32(val); + *((u32 *) opt->val) = cpu_to_le32(val); break; default: @@ -1346,8 +1346,8 @@ static 
int l2cap_build_conf_req(struct s //if (flush_to != L2CAP_DEFAULT_FLUSH_TO) // l2cap_add_conf_opt(&ptr, L2CAP_CONF_FLUSH_TO, 2, pi->flush_to); - req->dcid = __cpu_to_le16(pi->dcid); - req->flags = __cpu_to_le16(0); + req->dcid = cpu_to_le16(pi->dcid); + req->flags = cpu_to_le16(0); return ptr - data; } @@ -1383,9 +1383,9 @@ static int l2cap_build_conf_rsp(struct s else flags = 0x0001; - rsp->scid = __cpu_to_le16(l2cap_pi(sk)->dcid); - rsp->result = __cpu_to_le16(result ? *result : 0); - rsp->flags = __cpu_to_le16(flags); + rsp->scid = cpu_to_le16(l2cap_pi(sk)->dcid); + rsp->result = cpu_to_le16(result ? *result : 0); + rsp->flags = cpu_to_le16(flags); return ptr - data; } @@ -1470,10 +1470,10 @@ response: bh_unlock_sock(parent); sendresp: - rsp.scid = __cpu_to_le16(scid); - rsp.dcid = __cpu_to_le16(dcid); - rsp.result = __cpu_to_le16(result); - rsp.status = __cpu_to_le16(status); + rsp.scid = cpu_to_le16(scid); + rsp.dcid = cpu_to_le16(dcid); + rsp.result = cpu_to_le16(result); + rsp.status = cpu_to_le16(status); l2cap_send_cmd(conn, cmd->ident, L2CAP_CONN_RSP, sizeof(rsp), &rsp); return 0; } @@ -1613,8 +1613,8 @@ static inline int l2cap_config_rsp(struc l2cap_sock_set_timer(sk, HZ * 5); { struct l2cap_disconn_req req; - req.dcid = __cpu_to_le16(l2cap_pi(sk)->dcid); - req.scid = __cpu_to_le16(l2cap_pi(sk)->scid); + req.dcid = cpu_to_le16(l2cap_pi(sk)->dcid); + req.scid = cpu_to_le16(l2cap_pi(sk)->scid); l2cap_send_cmd(conn, l2cap_get_ident(conn), L2CAP_DISCONN_REQ, sizeof(req), &req); } @@ -1652,8 +1652,8 @@ static inline int l2cap_disconnect_req(s if (!(sk = l2cap_get_chan_by_scid(&conn->chan_list, dcid))) return 0; - rsp.dcid = __cpu_to_le16(l2cap_pi(sk)->scid); - rsp.scid = __cpu_to_le16(l2cap_pi(sk)->dcid); + rsp.dcid = cpu_to_le16(l2cap_pi(sk)->scid); + rsp.scid = cpu_to_le16(l2cap_pi(sk)->dcid); l2cap_send_cmd(conn, cmd->ident, L2CAP_DISCONN_RSP, sizeof(rsp), &rsp); sk->sk_shutdown = SHUTDOWN_MASK; @@ -1696,8 +1696,8 @@ static inline int l2cap_information_req( BT_DBG("type 0x%4.4x", type); - rsp.type = __cpu_to_le16(type); - rsp.result = __cpu_to_le16(L2CAP_IR_NOTSUPP); + rsp.type = cpu_to_le16(type); + rsp.result = cpu_to_le16(L2CAP_IR_NOTSUPP); l2cap_send_cmd(conn, cmd->ident, L2CAP_INFO_RSP, sizeof(rsp), &rsp); return 0; @@ -1794,7 +1794,7 @@ static inline void l2cap_sig_channel(str BT_DBG("error %d", err); /* FIXME: Map err to a valid reason */ - rej.reason = __cpu_to_le16(0); + rej.reason = cpu_to_le16(0); l2cap_send_cmd(conn, cmd.ident, L2CAP_COMMAND_REJ, sizeof(rej), &rej); } @@ -1993,10 +1993,10 @@ static int l2cap_auth_cfm(struct hci_con result = L2CAP_CR_SEC_BLOCK; } - rsp.scid = __cpu_to_le16(l2cap_pi(sk)->dcid); - rsp.dcid = __cpu_to_le16(l2cap_pi(sk)->scid); - rsp.result = __cpu_to_le16(result); - rsp.status = __cpu_to_le16(0); + rsp.scid = cpu_to_le16(l2cap_pi(sk)->dcid); + rsp.dcid = cpu_to_le16(l2cap_pi(sk)->scid); + rsp.result = cpu_to_le16(result); + rsp.status = cpu_to_le16(0); l2cap_send_cmd(conn, l2cap_pi(sk)->ident, L2CAP_CONN_RSP, sizeof(rsp), &rsp); @@ -2041,10 +2041,10 @@ static int l2cap_encrypt_cfm(struct hci_ result = L2CAP_CR_SEC_BLOCK; } - rsp.scid = __cpu_to_le16(l2cap_pi(sk)->dcid); - rsp.dcid = __cpu_to_le16(l2cap_pi(sk)->scid); - rsp.result = __cpu_to_le16(result); - rsp.status = __cpu_to_le16(0); + rsp.scid = cpu_to_le16(l2cap_pi(sk)->dcid); + rsp.dcid = cpu_to_le16(l2cap_pi(sk)->scid); + rsp.result = cpu_to_le16(result); + rsp.status = cpu_to_le16(0); l2cap_send_cmd(conn, l2cap_pi(sk)->ident, L2CAP_CONN_RSP, sizeof(rsp), &rsp); @@ -2107,7 +2107,8 
@@ static int l2cap_recv_acldata(struct hci if (!(conn->rx_skb = bt_skb_alloc(len, GFP_ATOMIC))) goto drop; - memcpy(skb_put(conn->rx_skb, skb->len), skb->data, skb->len); + skb_copy_from_linear_data(skb, skb_put(conn->rx_skb, skb->len), + skb->len); conn->rx_len = len - skb->len; } else { BT_DBG("Cont: frag len %d (expecting %d)", skb->len, conn->rx_len); @@ -2128,7 +2129,8 @@ static int l2cap_recv_acldata(struct hci goto drop; } - memcpy(skb_put(conn->rx_skb, skb->len), skb->data, skb->len); + skb_copy_from_linear_data(skb, skb_put(conn->rx_skb, skb->len), + skb->len); conn->rx_len -= skb->len; if (!conn->rx_len) { diff -puN net/bluetooth/rfcomm/core.c~git-net net/bluetooth/rfcomm/core.c --- a/net/bluetooth/rfcomm/core.c~git-net +++ a/net/bluetooth/rfcomm/core.c @@ -1567,7 +1567,7 @@ static int rfcomm_recv_frame(struct rfco /* Trim FCS */ skb->len--; skb->tail--; - fcs = *(u8 *) skb->tail; + fcs = *(u8 *)skb_tail_pointer(skb); if (__check_fcs(skb->data, type, fcs)) { BT_ERR("bad checksum in packet"); diff -puN net/bluetooth/sco.c~git-net net/bluetooth/sco.c --- a/net/bluetooth/sco.c~git-net +++ a/net/bluetooth/sco.c @@ -393,7 +393,7 @@ static void sco_sock_close(struct sock * default: sock_set_flag(sk, SOCK_ZAPPED); break; - }; + } release_sock(sk); diff -puN net/bridge/br.c~git-net net/bridge/br.c --- a/net/bridge/br.c~git-net +++ a/net/bridge/br.c @@ -37,7 +37,9 @@ static int __init br_init(void) return -EADDRINUSE; } - br_fdb_init(); + err = br_fdb_init(); + if (err) + goto err_out1; err = br_netfilter_init(); if (err) @@ -47,7 +49,10 @@ static int __init br_init(void) if (err) goto err_out2; - br_netlink_init(); + err = br_netlink_init(); + if (err) + goto err_out3; + brioctl_set(br_ioctl_deviceless_stub); br_handle_frame_hook = br_handle_frame; @@ -55,7 +60,8 @@ static int __init br_init(void) br_fdb_put_hook = br_fdb_put; return 0; - +err_out3: + unregister_netdevice_notifier(&br_device_notifier); err_out2: br_netfilter_fini(); err_out1: diff -puN net/bridge/br_device.c~git-net net/bridge/br_device.c --- a/net/bridge/br_device.c~git-net +++ a/net/bridge/br_device.c @@ -37,7 +37,7 @@ int br_dev_xmit(struct sk_buff *skb, str br->statistics.tx_packets++; br->statistics.tx_bytes += skb->len; - skb->mac.raw = skb->data; + skb_reset_mac_header(skb); skb_pull(skb, ETH_HLEN); if (dest[0] & 1) @@ -83,27 +83,21 @@ static int br_change_mtu(struct net_devi return 0; } -/* Allow setting mac address of pseudo-bridge to be same as - * any of the bound interfaces - */ +/* Allow setting mac address to any valid ethernet address. 
*/ static int br_set_mac_address(struct net_device *dev, void *p) { struct net_bridge *br = netdev_priv(dev); struct sockaddr *addr = p; - struct net_bridge_port *port; - int err = -EADDRNOTAVAIL; + + if (!is_valid_ether_addr(addr->sa_data)) + return -EINVAL; spin_lock_bh(&br->lock); - list_for_each_entry(port, &br->port_list, list) { - if (!compare_ether_addr(port->dev->dev_addr, addr->sa_data)) { - br_stp_change_bridge_id(br, addr->sa_data); - err = 0; - break; - } - } + memcpy(dev->dev_addr, addr->sa_data, ETH_ALEN); + br_stp_change_bridge_id(br, addr->sa_data); spin_unlock_bh(&br->lock); - return err; + return 0; } static void br_getinfo(struct net_device *dev, struct ethtool_drvinfo *info) diff -puN net/bridge/br_fdb.c~git-net net/bridge/br_fdb.c --- a/net/bridge/br_fdb.c~git-net +++ a/net/bridge/br_fdb.c @@ -20,19 +20,28 @@ #include #include #include +#include #include +#include #include "br_private.h" static struct kmem_cache *br_fdb_cache __read_mostly; static int fdb_insert(struct net_bridge *br, struct net_bridge_port *source, const unsigned char *addr); -void __init br_fdb_init(void) +static u32 fdb_salt __read_mostly; + +int __init br_fdb_init(void) { br_fdb_cache = kmem_cache_create("bridge_fdb_cache", sizeof(struct net_bridge_fdb_entry), 0, SLAB_HWCACHE_ALIGN, NULL, NULL); + if (!br_fdb_cache) + return -ENOMEM; + + get_random_bytes(&fdb_salt, sizeof(fdb_salt)); + return 0; } void __exit br_fdb_fini(void) @@ -44,24 +53,26 @@ void __exit br_fdb_fini(void) /* if topology_changing then use forward_delay (default 15 sec) * otherwise keep longer (default 5 minutes) */ -static __inline__ unsigned long hold_time(const struct net_bridge *br) +static inline unsigned long hold_time(const struct net_bridge *br) { return br->topology_change ? br->forward_delay : br->ageing_time; } -static __inline__ int has_expired(const struct net_bridge *br, +static inline int has_expired(const struct net_bridge *br, const struct net_bridge_fdb_entry *fdb) { return !fdb->is_static && time_before_eq(fdb->ageing_timer + hold_time(br), jiffies); } -static __inline__ int br_mac_hash(const unsigned char *mac) +static inline int br_mac_hash(const unsigned char *mac) { - return jhash(mac, ETH_ALEN, 0) & (BR_HASH_SIZE - 1); + /* use 1 byte of OUI cnd 3 bytes of NIC */ + u32 key = get_unaligned((u32 *)(mac + 2)); + return jhash_1word(key, fdb_salt) & (BR_HASH_SIZE - 1); } -static __inline__ void fdb_delete(struct net_bridge_fdb_entry *f) +static inline void fdb_delete(struct net_bridge_fdb_entry *f) { hlist_del_rcu(&f->hlist); br_fdb_put(f); @@ -128,7 +139,26 @@ void br_fdb_cleanup(unsigned long _data) mod_timer(&br->gc_timer, jiffies + HZ/10); } +/* Completely flush all dynamic entries in forwarding database.*/ +void br_fdb_flush(struct net_bridge *br) +{ + int i; + spin_lock_bh(&br->hash_lock); + for (i = 0; i < BR_HASH_SIZE; i++) { + struct net_bridge_fdb_entry *f; + struct hlist_node *h, *n; + hlist_for_each_entry_safe(f, h, n, &br->hash[i], hlist) { + if (!f->is_static) + fdb_delete(f); + } + } + spin_unlock_bh(&br->hash_lock); +} + +/* Flush all entries refering to a specific port. 
+ * if do_all is set also flush static entries + */ void br_fdb_delete_by_port(struct net_bridge *br, const struct net_bridge_port *p, int do_all) diff -puN net/bridge/br_forward.c~git-net net/bridge/br_forward.c --- a/net/bridge/br_forward.c~git-net +++ a/net/bridge/br_forward.c @@ -71,7 +71,7 @@ static void __br_forward(const struct ne indev = skb->dev; skb->dev = to->dev; - skb->ip_summed = CHECKSUM_NONE; + skb_forward_csum(skb); NF_HOOK(PF_BRIDGE, NF_BR_FORWARD, skb, indev, skb->dev, br_forward_finish); diff -puN net/bridge/br_if.c~git-net net/bridge/br_if.c --- a/net/bridge/br_if.c~git-net +++ a/net/bridge/br_if.c @@ -152,6 +152,8 @@ static void del_nbp(struct net_bridge_po br_stp_disable_port(p); spin_unlock_bh(&br->lock); + br_ifinfo_notify(RTM_DELLINK, p); + br_fdb_delete_by_port(br, p, 1); list_del_rcu(&p->list); @@ -203,7 +205,7 @@ static struct net_device *new_bridge_dev memcpy(br->group_addr, br_group_address, ETH_ALEN); br->feature_mask = dev->features; - br->stp_enabled = 0; + br->stp_enabled = BR_NO_STP; br->designated_root = br->bridge_id; br->root_path_cost = 0; br->root_port = 0; @@ -434,6 +436,8 @@ int br_add_if(struct net_bridge *br, str br_stp_enable_port(p); spin_unlock_bh(&br->lock); + br_ifinfo_notify(RTM_NEWLINK, p); + dev_set_mtu(br->dev, br_min_mtu(br)); kobject_uevent(&p->kobj, KOBJ_ADD); diff -puN net/bridge/br_input.c~git-net net/bridge/br_input.c --- a/net/bridge/br_input.c~git-net +++ a/net/bridge/br_input.c @@ -112,46 +112,51 @@ static int br_handle_local_finish(struct */ static inline int is_link_local(const unsigned char *dest) { - return memcmp(dest, br_group_address, 5) == 0 && (dest[5] & 0xf0) == 0; + const u16 *a = (const u16 *) dest; + static const u16 *const b = (const u16 *const ) br_group_address; + static const u16 m = __constant_cpu_to_be16(0xfff0); + + return ((a[0] ^ b[0]) | (a[1] ^ b[1]) | ((a[2] ^ b[2]) & m)) == 0; } /* * Called via br_handle_frame_hook. - * Return 0 if *pskb should be processed furthur - * 1 if *pskb is handled + * Return NULL if skb is handled * note: already called with rcu_read_lock (preempt_disabled) */ -int br_handle_frame(struct net_bridge_port *p, struct sk_buff **pskb) +struct sk_buff *br_handle_frame(struct net_bridge_port *p, struct sk_buff *skb) { - struct sk_buff *skb = *pskb; const unsigned char *dest = eth_hdr(skb)->h_dest; if (!is_valid_ether_addr(eth_hdr(skb)->h_source)) - goto err; + goto drop; if (unlikely(is_link_local(dest))) { skb->pkt_type = PACKET_HOST; - return NF_HOOK(PF_BRIDGE, NF_BR_LOCAL_IN, skb, skb->dev, - NULL, br_handle_local_finish) != 0; + + return (NF_HOOK(PF_BRIDGE, NF_BR_LOCAL_IN, skb, skb->dev, + NULL, br_handle_local_finish) == 0) ? 
skb : NULL; } - if (p->state == BR_STATE_FORWARDING || p->state == BR_STATE_LEARNING) { + switch (p->state) { + case BR_STATE_FORWARDING: + if (br_should_route_hook) { - if (br_should_route_hook(pskb)) - return 0; - skb = *pskb; + if (br_should_route_hook(&skb)) + return skb; dest = eth_hdr(skb)->h_dest; } - + /* fall through */ + case BR_STATE_LEARNING: if (!compare_ether_addr(p->br->dev->dev_addr, dest)) skb->pkt_type = PACKET_HOST; NF_HOOK(PF_BRIDGE, NF_BR_PRE_ROUTING, skb, skb->dev, NULL, br_handle_frame_finish); - return 1; + break; + default: +drop: + kfree_skb(skb); } - -err: - kfree_skb(skb); - return 1; + return NULL; } diff -puN net/bridge/br_ioctl.c~git-net net/bridge/br_ioctl.c --- a/net/bridge/br_ioctl.c~git-net +++ a/net/bridge/br_ioctl.c @@ -137,7 +137,8 @@ static int old_dev_ioctl(struct net_devi b.topology_change = br->topology_change; b.topology_change_detected = br->topology_change_detected; b.root_port = br->root_port; - b.stp_enabled = br->stp_enabled; + + b.stp_enabled = (br->stp_enabled != BR_NO_STP); b.ageing_time = jiffies_to_clock_t(br->ageing_time); b.hello_timer_value = br_timer_value(&br->hello_timer); b.tcn_timer_value = br_timer_value(&br->tcn_timer); @@ -251,7 +252,7 @@ static int old_dev_ioctl(struct net_devi if (!capable(CAP_NET_ADMIN)) return -EPERM; - br->stp_enabled = args[1]?1:0; + br_stp_set_enabled(br, args[1]); return 0; case BRCTL_SET_BRIDGE_PRIORITY: diff -puN net/bridge/br_netfilter.c~git-net net/bridge/br_netfilter.c --- a/net/bridge/br_netfilter.c~git-net +++ a/net/bridge/br_netfilter.c @@ -29,6 +29,8 @@ #include #include #include +#include +#include #include #include #include @@ -48,8 +50,8 @@ #define skb_origaddr(skb) (((struct bridge_skb_cb *) \ (skb->nf_bridge->data))->daddr.ipv4) -#define store_orig_dstaddr(skb) (skb_origaddr(skb) = (skb)->nh.iph->daddr) -#define dnat_took_place(skb) (skb_origaddr(skb) != (skb)->nh.iph->daddr) +#define store_orig_dstaddr(skb) (skb_origaddr(skb) = ip_hdr(skb)->daddr) +#define dnat_took_place(skb) (skb_origaddr(skb) != ip_hdr(skb)->daddr) #ifdef CONFIG_SYSCTL static struct ctl_table_header *brnf_sysctl_header; @@ -57,8 +59,10 @@ static int brnf_call_iptables __read_mos static int brnf_call_ip6tables __read_mostly = 1; static int brnf_call_arptables __read_mostly = 1; static int brnf_filter_vlan_tagged __read_mostly = 1; +static int brnf_filter_pppoe_tagged __read_mostly = 1; #else #define brnf_filter_vlan_tagged 1 +#define brnf_filter_pppoe_tagged 1 #endif static inline __be16 vlan_proto(const struct sk_buff *skb) @@ -81,6 +85,22 @@ static inline __be16 vlan_proto(const st vlan_proto(skb) == htons(ETH_P_ARP) && \ brnf_filter_vlan_tagged) +static inline __be16 pppoe_proto(const struct sk_buff *skb) +{ + return *((__be16 *)(skb_mac_header(skb) + ETH_HLEN + + sizeof(struct pppoe_hdr))); +} + +#define IS_PPPOE_IP(skb) \ + (skb->protocol == htons(ETH_P_PPP_SES) && \ + pppoe_proto(skb) == htons(PPP_IP) && \ + brnf_filter_pppoe_tagged) + +#define IS_PPPOE_IPV6(skb) \ + (skb->protocol == htons(ETH_P_PPP_SES) && \ + pppoe_proto(skb) == htons(PPP_IPV6) && \ + brnf_filter_pppoe_tagged) + /* We need these fake structures to make netfilter happy -- * lots of places assume that skb->dst != NULL, which isn't * all that unreasonable. 
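
The br_input.c hunk above reworks the bridge receive hook: br_handle_frame() now takes the skb directly and returns either the skb (caller continues normal receive processing) or NULL (the bridge consumed the frame). A minimal sketch of a caller under that convention, illustrative only and not part of the patch; example_rx() and deliver_up() are hypothetical names, and the hook-pointer NULL check and RCU handling done by the real caller, handle_bridge() in the net/core/dev.c hunk later in this diff, are omitted:

static int example_rx(struct net_bridge_port *port, struct sk_buff *skb)
{
	/* hook returns NULL when the bridge forwarded or dropped the frame */
	skb = br_handle_frame_hook(port, skb);
	if (skb == NULL)
		return NET_RX_SUCCESS;

	/* frame not consumed: hand it to normal protocol delivery */
	return deliver_up(skb);		/* hypothetical helper */
}

The net/core/dev.c changes near the end of this diff show the real caller following the same pattern: handle_bridge() passes the skb through the hook and netif_receive_skb() bails out when NULL comes back.
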
@@ -128,8 +148,11 @@ static inline void nf_bridge_save_header if (skb->protocol == htons(ETH_P_8021Q)) header_size += VLAN_HLEN; + else if (skb->protocol == htons(ETH_P_PPP_SES)) + header_size += PPPOE_SES_HLEN; - memcpy(skb->nf_bridge->data, skb->data - header_size, header_size); + skb_copy_from_linear_data_offset(skb, -header_size, + skb->nf_bridge->data, header_size); } /* @@ -143,15 +166,20 @@ int nf_bridge_copy_header(struct sk_buff if (skb->protocol == htons(ETH_P_8021Q)) header_size += VLAN_HLEN; + else if (skb->protocol == htons(ETH_P_PPP_SES)) + header_size += PPPOE_SES_HLEN; err = skb_cow(skb, header_size); if (err) return err; - memcpy(skb->data - header_size, skb->nf_bridge->data, header_size); + skb_copy_to_linear_data_offset(skb, -header_size, + skb->nf_bridge->data, header_size); if (skb->protocol == htons(ETH_P_8021Q)) __skb_push(skb, VLAN_HLEN); + else if (skb->protocol == htons(ETH_P_PPP_SES)) + __skb_push(skb, PPPOE_SES_HLEN); return 0; } @@ -174,7 +202,10 @@ static int br_nf_pre_routing_finish_ipv6 skb->dev = nf_bridge->physindev; if (skb->protocol == htons(ETH_P_8021Q)) { skb_push(skb, VLAN_HLEN); - skb->nh.raw -= VLAN_HLEN; + skb->network_header -= VLAN_HLEN; + } else if (skb->protocol == htons(ETH_P_PPP_SES)) { + skb_push(skb, PPPOE_SES_HLEN); + skb->network_header -= PPPOE_SES_HLEN; } NF_HOOK_THRESH(PF_BRIDGE, NF_BR_PRE_ROUTING, skb, skb->dev, NULL, br_handle_frame_finish, 1); @@ -255,7 +286,10 @@ static int br_nf_pre_routing_finish_brid else { if (skb->protocol == htons(ETH_P_8021Q)) { skb_pull(skb, VLAN_HLEN); - skb->nh.raw += VLAN_HLEN; + skb->network_header += VLAN_HLEN; + } else if (skb->protocol == htons(ETH_P_PPP_SES)) { + skb_pull(skb, PPPOE_SES_HLEN); + skb->network_header += PPPOE_SES_HLEN; } skb->dst->output(skb); } @@ -265,7 +299,7 @@ static int br_nf_pre_routing_finish_brid static int br_nf_pre_routing_finish(struct sk_buff *skb) { struct net_device *dev = skb->dev; - struct iphdr *iph = skb->nh.iph; + struct iphdr *iph = ip_hdr(skb); struct nf_bridge_info *nf_bridge = skb->nf_bridge; int err; @@ -325,7 +359,11 @@ bridged_dnat: if (skb->protocol == htons(ETH_P_8021Q)) { skb_push(skb, VLAN_HLEN); - skb->nh.raw -= VLAN_HLEN; + skb->network_header -= VLAN_HLEN; + } else if(skb->protocol == + htons(ETH_P_PPP_SES)) { + skb_push(skb, PPPOE_SES_HLEN); + skb->network_header -= PPPOE_SES_HLEN; } NF_HOOK_THRESH(PF_BRIDGE, NF_BR_PRE_ROUTING, skb, skb->dev, NULL, @@ -344,7 +382,10 @@ bridged_dnat: skb->dev = nf_bridge->physindev; if (skb->protocol == htons(ETH_P_8021Q)) { skb_push(skb, VLAN_HLEN); - skb->nh.raw -= VLAN_HLEN; + skb->network_header -= VLAN_HLEN; + } else if (skb->protocol == htons(ETH_P_PPP_SES)) { + skb_push(skb, PPPOE_SES_HLEN); + skb->network_header -= PPPOE_SES_HLEN; } NF_HOOK_THRESH(PF_BRIDGE, NF_BR_PRE_ROUTING, skb, skb->dev, NULL, br_handle_frame_finish, 1); @@ -372,9 +413,10 @@ static struct net_device *setup_pre_rout /* We only check the length. 
A bridge shouldn't do any hop-by-hop stuff anyway */ static int check_hbh_len(struct sk_buff *skb) { - unsigned char *raw = (u8 *) (skb->nh.ipv6h + 1); + unsigned char *raw = (u8 *)(ipv6_hdr(skb) + 1); u32 pkt_len; - int off = raw - skb->nh.raw; + const unsigned char *nh = skb_network_header(skb); + int off = raw - nh; int len = (raw[1] + 1) << 3; if ((raw + len) - skb->data > skb_headlen(skb)) @@ -384,9 +426,9 @@ static int check_hbh_len(struct sk_buff len -= 2; while (len > 0) { - int optlen = skb->nh.raw[off + 1] + 2; + int optlen = nh[off + 1] + 2; - switch (skb->nh.raw[off]) { + switch (nh[off]) { case IPV6_TLV_PAD0: optlen = 1; break; @@ -395,17 +437,18 @@ static int check_hbh_len(struct sk_buff break; case IPV6_TLV_JUMBO: - if (skb->nh.raw[off + 1] != 4 || (off & 3) != 2) + if (nh[off + 1] != 4 || (off & 3) != 2) goto bad; - pkt_len = ntohl(*(__be32 *) (skb->nh.raw + off + 2)); + pkt_len = ntohl(*(__be32 *) (nh + off + 2)); if (pkt_len <= IPV6_MAXPLEN || - skb->nh.ipv6h->payload_len) + ipv6_hdr(skb)->payload_len) goto bad; if (pkt_len > skb->len - sizeof(struct ipv6hdr)) goto bad; if (pskb_trim_rcsum(skb, pkt_len + sizeof(struct ipv6hdr))) goto bad; + nh = skb_network_header(skb); break; default: if (optlen > len) @@ -439,7 +482,7 @@ static unsigned int br_nf_pre_routing_ip if (!pskb_may_pull(skb, sizeof(struct ipv6hdr))) goto inhdr_error; - hdr = skb->nh.ipv6h; + hdr = ipv6_hdr(skb); if (hdr->version != 6) goto inhdr_error; @@ -485,7 +528,8 @@ static unsigned int br_nf_pre_routing(un __u32 len; struct sk_buff *skb = *pskb; - if (skb->protocol == htons(ETH_P_IPV6) || IS_VLAN_IPV6(skb)) { + if (skb->protocol == htons(ETH_P_IPV6) || IS_VLAN_IPV6(skb) || + IS_PPPOE_IPV6(skb)) { #ifdef CONFIG_SYSCTL if (!brnf_call_ip6tables) return NF_ACCEPT; @@ -495,7 +539,10 @@ static unsigned int br_nf_pre_routing(un if (skb->protocol == htons(ETH_P_8021Q)) { skb_pull_rcsum(skb, VLAN_HLEN); - skb->nh.raw += VLAN_HLEN; + skb->network_header += VLAN_HLEN; + } else if (skb->protocol == htons(ETH_P_PPP_SES)) { + skb_pull_rcsum(skb, PPPOE_SES_HLEN); + skb->network_header += PPPOE_SES_HLEN; } return br_nf_pre_routing_ipv6(hook, skb, in, out, okfn); } @@ -504,7 +551,8 @@ static unsigned int br_nf_pre_routing(un return NF_ACCEPT; #endif - if (skb->protocol != htons(ETH_P_IP) && !IS_VLAN_IP(skb)) + if (skb->protocol != htons(ETH_P_IP) && !IS_VLAN_IP(skb) && + !IS_PPPOE_IP(skb)) return NF_ACCEPT; if ((skb = skb_share_check(*pskb, GFP_ATOMIC)) == NULL) @@ -512,20 +560,23 @@ static unsigned int br_nf_pre_routing(un if (skb->protocol == htons(ETH_P_8021Q)) { skb_pull_rcsum(skb, VLAN_HLEN); - skb->nh.raw += VLAN_HLEN; + skb->network_header += VLAN_HLEN; + } else if (skb->protocol == htons(ETH_P_PPP_SES)) { + skb_pull_rcsum(skb, PPPOE_SES_HLEN); + skb->network_header += PPPOE_SES_HLEN; } if (!pskb_may_pull(skb, sizeof(struct iphdr))) goto inhdr_error; - iph = skb->nh.iph; + iph = ip_hdr(skb); if (iph->ihl < 5 || iph->version != 4) goto inhdr_error; if (!pskb_may_pull(skb, 4 * iph->ihl)) goto inhdr_error; - iph = skb->nh.iph; + iph = ip_hdr(skb); if (ip_fast_csum((__u8 *) iph, iph->ihl) != 0) goto inhdr_error; @@ -593,7 +644,10 @@ static int br_nf_forward_finish(struct s } if (skb->protocol == htons(ETH_P_8021Q)) { skb_push(skb, VLAN_HLEN); - skb->nh.raw -= VLAN_HLEN; + skb->network_header -= VLAN_HLEN; + } else if (skb->protocol == htons(ETH_P_PPP_SES)) { + skb_push(skb, PPPOE_SES_HLEN); + skb->network_header -= PPPOE_SES_HLEN; } NF_HOOK_THRESH(PF_BRIDGE, NF_BR_FORWARD, skb, in, skb->dev, br_forward_finish, 1); @@ 
-622,14 +676,18 @@ static unsigned int br_nf_forward_ip(uns if (!parent) return NF_DROP; - if (skb->protocol == htons(ETH_P_IP) || IS_VLAN_IP(skb)) + if (skb->protocol == htons(ETH_P_IP) || IS_VLAN_IP(skb) || + IS_PPPOE_IP(skb)) pf = PF_INET; else pf = PF_INET6; if (skb->protocol == htons(ETH_P_8021Q)) { skb_pull(*pskb, VLAN_HLEN); - (*pskb)->nh.raw += VLAN_HLEN; + (*pskb)->network_header += VLAN_HLEN; + } else if (skb->protocol == htons(ETH_P_PPP_SES)) { + skb_pull(*pskb, PPPOE_SES_HLEN); + (*pskb)->network_header += PPPOE_SES_HLEN; } nf_bridge = skb->nf_bridge; @@ -665,13 +723,13 @@ static unsigned int br_nf_forward_arp(un if (!IS_VLAN_ARP(skb)) return NF_ACCEPT; skb_pull(*pskb, VLAN_HLEN); - (*pskb)->nh.raw += VLAN_HLEN; + (*pskb)->network_header += VLAN_HLEN; } - if (skb->nh.arph->ar_pln != 4) { + if (arp_hdr(skb)->ar_pln != 4) { if (IS_VLAN_ARP(skb)) { skb_push(*pskb, VLAN_HLEN); - (*pskb)->nh.raw -= VLAN_HLEN; + (*pskb)->network_header -= VLAN_HLEN; } return NF_ACCEPT; } @@ -721,7 +779,10 @@ static unsigned int br_nf_local_out(unsi } if (skb->protocol == htons(ETH_P_8021Q)) { skb_push(skb, VLAN_HLEN); - skb->nh.raw -= VLAN_HLEN; + skb->network_header -= VLAN_HLEN; + } else if (skb->protocol == htons(ETH_P_PPP_SES)) { + skb_push(skb, PPPOE_SES_HLEN); + skb->network_header -= PPPOE_SES_HLEN; } NF_HOOK(PF_BRIDGE, NF_BR_FORWARD, skb, realindev, skb->dev, @@ -753,7 +814,8 @@ static unsigned int br_nf_post_routing(u #ifdef CONFIG_NETFILTER_DEBUG /* Be very paranoid. This probably won't happen anymore, but let's * keep the check just to be sure... */ - if (skb->mac.raw < skb->head || skb->mac.raw + ETH_HLEN > skb->data) { + if (skb_mac_header(skb) < skb->head || + skb_mac_header(skb) + ETH_HLEN > skb->data) { printk(KERN_CRIT "br_netfilter: Argh!! br_nf_post_routing: " "bad mac.raw pointer.\n"); goto print_error; @@ -766,7 +828,8 @@ static unsigned int br_nf_post_routing(u if (!realoutdev) return NF_DROP; - if (skb->protocol == htons(ETH_P_IP) || IS_VLAN_IP(skb)) + if (skb->protocol == htons(ETH_P_IP) || IS_VLAN_IP(skb) || + IS_PPPOE_IP(skb)) pf = PF_INET; else pf = PF_INET6; @@ -787,7 +850,10 @@ static unsigned int br_nf_post_routing(u if (skb->protocol == htons(ETH_P_8021Q)) { skb_pull(skb, VLAN_HLEN); - skb->nh.raw += VLAN_HLEN; + skb->network_header += VLAN_HLEN; + } else if (skb->protocol == htons(ETH_P_PPP_SES)) { + skb_pull(skb, PPPOE_SES_HLEN); + skb->network_header += PPPOE_SES_HLEN; } nf_bridge_save_header(skb); @@ -808,7 +874,7 @@ print_error: if (realoutdev) printk("[%s]", realoutdev->name); } - printk(" head:%p, raw:%p, data:%p\n", skb->head, skb->mac.raw, + printk(" head:%p, raw:%p, data:%p\n", skb->head, skb_mac_header(skb), skb->data); dump_stack(); return NF_ACCEPT; @@ -925,6 +991,14 @@ static ctl_table brnf_table[] = { .mode = 0644, .proc_handler = &brnf_sysctl_call_tables, }, + { + .ctl_name = NET_BRIDGE_NF_FILTER_PPPOE_TAGGED, + .procname = "bridge-nf-filter-pppoe-tagged", + .data = &brnf_filter_pppoe_tagged, + .maxlen = sizeof(int), + .mode = 0644, + .proc_handler = &brnf_sysctl_call_tables, + }, { .ctl_name = 0 } }; diff -puN net/bridge/br_netlink.c~git-net net/bridge/br_netlink.c --- a/net/bridge/br_netlink.c~git-net +++ a/net/bridge/br_netlink.c @@ -11,8 +11,7 @@ */ #include -#include -#include +#include #include "br_private.h" static inline size_t br_nlmsg_size(void) @@ -110,7 +109,6 @@ static int br_dump_ifinfo(struct sk_buff struct net_device *dev; int idx; - read_lock(&dev_base_lock); for (dev = dev_base, idx = 0; dev; dev = dev->next) { /* not a bridge port */ 
if (dev->br_port == NULL || idx < cb->args[0]) @@ -123,7 +121,6 @@ static int br_dump_ifinfo(struct sk_buff skip: ++idx; } - read_unlock(&dev_base_lock); cb->args[0] = idx; @@ -166,7 +163,7 @@ static int br_rtm_setlink(struct sk_buff return -EINVAL; /* if kernel STP is running, don't allow changes */ - if (p->br->stp_enabled) + if (p->br->stp_enabled == BR_KERNEL_STP) return -EBUSY; if (!netif_running(dev) || @@ -179,18 +176,19 @@ static int br_rtm_setlink(struct sk_buff } -static struct rtnetlink_link bridge_rtnetlink_table[RTM_NR_MSGTYPES] = { - [RTM_GETLINK - RTM_BASE] = { .dumpit = br_dump_ifinfo, }, - [RTM_SETLINK - RTM_BASE] = { .doit = br_rtm_setlink, }, -}; - -void __init br_netlink_init(void) +int __init br_netlink_init(void) { - rtnetlink_links[PF_BRIDGE] = bridge_rtnetlink_table; + if (__rtnl_register(PF_BRIDGE, RTM_GETLINK, NULL, br_dump_ifinfo)) + return -ENOBUFS; + + /* Only the first call to __rtnl_register can fail */ + __rtnl_register(PF_BRIDGE, RTM_SETLINK, br_rtm_setlink, NULL); + + return 0; } void __exit br_netlink_fini(void) { - rtnetlink_links[PF_BRIDGE] = NULL; + rtnl_unregister_all(PF_BRIDGE); } diff -puN net/bridge/br_notify.c~git-net net/bridge/br_notify.c --- a/net/bridge/br_notify.c~git-net +++ a/net/bridge/br_notify.c @@ -50,7 +50,6 @@ static int br_device_event(struct notifi case NETDEV_CHANGEADDR: spin_lock_bh(&br->lock); br_fdb_changeaddr(p, dev->dev_addr); - br_ifinfo_notify(RTM_NEWLINK, p); br_stp_recalculate_bridge_id(br); spin_unlock_bh(&br->lock); break; @@ -74,10 +73,11 @@ static int br_device_event(struct notifi break; case NETDEV_UP: - spin_lock_bh(&br->lock); - if (netif_carrier_ok(dev) && (br->dev->flags & IFF_UP)) + if (netif_carrier_ok(dev) && (br->dev->flags & IFF_UP)) { + spin_lock_bh(&br->lock); br_stp_enable_port(p); - spin_unlock_bh(&br->lock); + spin_unlock_bh(&br->lock); + } break; case NETDEV_UNREGISTER: @@ -85,5 +85,10 @@ static int br_device_event(struct notifi break; } + /* Events that may cause spanning tree to refresh */ + if (event == NETDEV_CHANGEADDR || event == NETDEV_UP || + event == NETDEV_CHANGE || event == NETDEV_DOWN) + br_ifinfo_notify(RTM_NEWLINK, p); + return NOTIFY_DONE; } diff -puN net/bridge/br_private.h~git-net net/bridge/br_private.h --- a/net/bridge/br_private.h~git-net +++ a/net/bridge/br_private.h @@ -26,7 +26,10 @@ #define BR_PORT_BITS 10 #define BR_MAX_PORTS (1<bridge_id, &br->designated_root, 8); } - /* br_device.c */ extern void br_dev_setup(struct net_device *dev); extern int br_dev_xmit(struct sk_buff *skb, struct net_device *dev); /* br_fdb.c */ -extern void br_fdb_init(void); +extern int br_fdb_init(void); extern void br_fdb_fini(void); +extern void br_fdb_flush(struct net_bridge *br); extern void br_fdb_changeaddr(struct net_bridge_port *p, const unsigned char *newaddr); extern void br_fdb_cleanup(unsigned long arg); @@ -182,7 +191,8 @@ extern void br_features_recompute(struct /* br_input.c */ extern int br_handle_frame_finish(struct sk_buff *skb); -extern int br_handle_frame(struct net_bridge_port *p, struct sk_buff **pskb); +extern struct sk_buff *br_handle_frame(struct net_bridge_port *p, + struct sk_buff *skb); /* br_ioctl.c */ extern int br_dev_ioctl(struct net_device *dev, struct ifreq *rq, int cmd); @@ -207,6 +217,7 @@ extern void br_become_designated_port(st /* br_stp_if.c */ extern void br_stp_enable_bridge(struct net_bridge *br); extern void br_stp_disable_bridge(struct net_bridge *br); +extern void br_stp_set_enabled(struct net_bridge *br, unsigned long val); extern void 
br_stp_enable_port(struct net_bridge_port *p); extern void br_stp_disable_port(struct net_bridge_port *p); extern void br_stp_recalculate_bridge_id(struct net_bridge *br); @@ -235,7 +246,7 @@ extern void (*br_fdb_put_hook)(struct ne /* br_netlink.c */ -extern void br_netlink_init(void); +extern int br_netlink_init(void); extern void br_netlink_fini(void); extern void br_ifinfo_notify(int event, struct net_bridge_port *port); diff -puN net/bridge/br_stp.c~git-net net/bridge/br_stp.c --- a/net/bridge/br_stp.c~git-net +++ a/net/bridge/br_stp.c @@ -370,11 +370,11 @@ static void br_make_blocking(struct net_ static void br_make_forwarding(struct net_bridge_port *p) { if (p->state == BR_STATE_BLOCKING) { - if (p->br->stp_enabled) { + if (p->br->stp_enabled == BR_KERNEL_STP) p->state = BR_STATE_LISTENING; - } else { + else p->state = BR_STATE_LEARNING; - } + br_log_state(p); mod_timer(&p->forward_delay_timer, jiffies + p->br->forward_delay); } } @@ -384,6 +384,10 @@ void br_port_state_selection(struct net_ { struct net_bridge_port *p; + /* Don't change port states if userspace is handling STP */ + if (br->stp_enabled == BR_USER_STP) + return; + list_for_each_entry(p, &br->port_list, list) { if (p->state != BR_STATE_DISABLED) { if (p->port_no == br->root_port) { diff -puN net/bridge/br_stp_bpdu.c~git-net net/bridge/br_stp_bpdu.c --- a/net/bridge/br_stp_bpdu.c~git-net +++ a/net/bridge/br_stp_bpdu.c @@ -33,9 +33,6 @@ static void br_send_bpdu(struct net_brid { struct sk_buff *skb; - if (!p->br->stp_enabled) - return; - skb = dev_alloc_skb(length+LLC_RESERVE); if (!skb) return; @@ -75,6 +72,9 @@ void br_send_config_bpdu(struct net_brid { unsigned char buf[35]; + if (p->br->stp_enabled != BR_KERNEL_STP) + return; + buf[0] = 0; buf[1] = 0; buf[2] = 0; @@ -117,6 +117,9 @@ void br_send_tcn_bpdu(struct net_bridge_ { unsigned char buf[4]; + if (p->br->stp_enabled != BR_KERNEL_STP) + return; + buf[0] = 0; buf[1] = 0; buf[2] = 0; @@ -157,9 +160,13 @@ int br_stp_rcv(struct sk_buff *skb, stru br = p->br; spin_lock(&br->lock); - if (p->state == BR_STATE_DISABLED - || !br->stp_enabled - || !(br->dev->flags & IFF_UP)) + if (br->stp_enabled != BR_KERNEL_STP) + goto out; + + if (!(br->dev->flags & IFF_UP)) + goto out; + + if (p->state == BR_STATE_DISABLED) goto out; if (compare_ether_addr(dest, br->group_addr) != 0) diff -puN net/bridge/br_stp_if.c~git-net net/bridge/br_stp_if.c --- a/net/bridge/br_stp_if.c~git-net +++ a/net/bridge/br_stp_if.c @@ -87,7 +87,6 @@ void br_stp_disable_bridge(struct net_br void br_stp_enable_port(struct net_bridge_port *p) { br_init_port(p); - br_ifinfo_notify(RTM_NEWLINK, p); br_port_state_selection(p->br); } @@ -101,8 +100,6 @@ void br_stp_disable_port(struct net_brid printk(KERN_INFO "%s: port %i(%s) entering %s state\n", br->dev->name, p->port_no, p->dev->name, "disabled"); - br_ifinfo_notify(RTM_DELLINK, p); - wasroot = br_is_root_bridge(br); br_become_designated_port(p); p->state = BR_STATE_DISABLED; @@ -123,6 +120,62 @@ void br_stp_disable_port(struct net_brid br_become_root_bridge(br); } +static void br_stp_start(struct net_bridge *br) +{ + int r; + char *argv[] = { BR_STP_PROG, br->dev->name, "start", NULL }; + char *envp[] = { NULL }; + + r = call_usermodehelper(BR_STP_PROG, argv, envp, 1); + if (r == 0) { + br->stp_enabled = BR_USER_STP; + printk(KERN_INFO "%s: userspace STP started\n", br->dev->name); + } else { + br->stp_enabled = BR_KERNEL_STP; + printk(KERN_INFO "%s: starting userspace STP failed, " + "staring kernel STP\n", br->dev->name); + + /* To start timers on any 
ports left in blocking */ + spin_lock_bh(&br->lock); + br_port_state_selection(br); + spin_unlock_bh(&br->lock); + } +} + +static void br_stp_stop(struct net_bridge *br) +{ + int r; + char *argv[] = { BR_STP_PROG, br->dev->name, "stop", NULL }; + char *envp[] = { NULL }; + + if (br->stp_enabled == BR_USER_STP) { + r = call_usermodehelper(BR_STP_PROG, argv, envp, 1); + printk(KERN_INFO "%s: userspace STP stopped, return code %d\n", + br->dev->name, r); + + + /* To start timers on any ports left in blocking */ + spin_lock_bh(&br->lock); + br_port_state_selection(br); + spin_unlock_bh(&br->lock); + } + + br->stp_enabled = BR_NO_STP; +} + +void br_stp_set_enabled(struct net_bridge *br, unsigned long val) +{ + ASSERT_RTNL(); + + if (val) { + if (br->stp_enabled == BR_NO_STP) + br_stp_start(br); + } else { + if (br->stp_enabled != BR_NO_STP) + br_stp_stop(br); + } +} + /* called under bridge lock */ void br_stp_change_bridge_id(struct net_bridge *br, const unsigned char *addr) { diff -puN net/bridge/br_sysfs_br.c~git-net net/bridge/br_sysfs_br.c --- a/net/bridge/br_sysfs_br.c~git-net +++ a/net/bridge/br_sysfs_br.c @@ -149,7 +149,9 @@ static ssize_t show_stp_state(struct dev static void set_stp_state(struct net_bridge *br, unsigned long val) { - br->stp_enabled = val; + spin_unlock_bh(&br->lock); + br_stp_set_enabled(br, val); + spin_lock_bh(&br->lock); } static ssize_t store_stp_state(struct device *d, @@ -309,6 +311,19 @@ static ssize_t store_group_addr(struct d static DEVICE_ATTR(group_addr, S_IRUGO | S_IWUSR, show_group_addr, store_group_addr); +static ssize_t store_flush(struct device *d, + struct device_attribute *attr, + const char *buf, size_t len) +{ + struct net_bridge *br = to_bridge(d); + + if (!capable(CAP_NET_ADMIN)) + return -EPERM; + + br_fdb_flush(br); + return len; +} +static DEVICE_ATTR(flush, S_IWUSR, NULL, store_flush); static struct attribute *bridge_attrs[] = { &dev_attr_forward_delay.attr, @@ -328,6 +343,7 @@ static struct attribute *bridge_attrs[] &dev_attr_topology_change_timer.attr, &dev_attr_gc_timer.attr, &dev_attr_group_addr.attr, + &dev_attr_flush.attr, NULL }; diff -puN net/bridge/br_sysfs_if.c~git-net net/bridge/br_sysfs_if.c --- a/net/bridge/br_sysfs_if.c~git-net +++ a/net/bridge/br_sysfs_if.c @@ -136,6 +136,13 @@ static ssize_t show_hold_timer(struct ne } static BRPORT_ATTR(hold_timer, S_IRUGO, show_hold_timer, NULL); +static ssize_t store_flush(struct net_bridge_port *p, unsigned long v) +{ + br_fdb_delete_by_port(p->br, p, 0); // Don't delete local entry + return 0; +} +static BRPORT_ATTR(flush, S_IWUSR, NULL, store_flush); + static struct brport_attribute *brport_attrs[] = { &brport_attr_path_cost, &brport_attr_priority, @@ -151,6 +158,7 @@ static struct brport_attribute *brport_a &brport_attr_message_age_timer, &brport_attr_forward_delay_timer, &brport_attr_hold_timer, + &brport_attr_flush, NULL }; diff -puN net/bridge/netfilter/ebt_arp.c~git-net net/bridge/netfilter/ebt_arp.c --- a/net/bridge/netfilter/ebt_arp.c~git-net +++ a/net/bridge/netfilter/ebt_arp.c @@ -35,40 +35,36 @@ static int ebt_filter_arp(const struct s return EBT_NOMATCH; if (info->bitmask & (EBT_ARP_SRC_IP | EBT_ARP_DST_IP)) { - __be32 _addr, *ap; + __be32 saddr, daddr, *sap, *dap; - /* IPv4 addresses are always 4 bytes */ - if (ah->ar_pln != sizeof(__be32)) + if (ah->ar_pln != sizeof(__be32) || ah->ar_pro != htons(ETH_P_IP)) + return EBT_NOMATCH; + sap = skb_header_pointer(skb, sizeof(struct arphdr) + + ah->ar_hln, sizeof(saddr), + &saddr); + if (sap == NULL) + return EBT_NOMATCH; + dap = 
skb_header_pointer(skb, sizeof(struct arphdr) + + 2*ah->ar_hln+sizeof(saddr), + sizeof(daddr), &daddr); + if (dap == NULL) + return EBT_NOMATCH; + if (info->bitmask & EBT_ARP_SRC_IP && + FWINV(info->saddr != (*sap & info->smsk), EBT_ARP_SRC_IP)) + return EBT_NOMATCH; + if (info->bitmask & EBT_ARP_DST_IP && + FWINV(info->daddr != (*dap & info->dmsk), EBT_ARP_DST_IP)) + return EBT_NOMATCH; + if (info->bitmask & EBT_ARP_GRAT && + FWINV(*dap != *sap, EBT_ARP_GRAT)) return EBT_NOMATCH; - if (info->bitmask & EBT_ARP_SRC_IP) { - ap = skb_header_pointer(skb, sizeof(struct arphdr) + - ah->ar_hln, sizeof(_addr), - &_addr); - if (ap == NULL) - return EBT_NOMATCH; - if (FWINV(info->saddr != (*ap & info->smsk), - EBT_ARP_SRC_IP)) - return EBT_NOMATCH; - } - - if (info->bitmask & EBT_ARP_DST_IP) { - ap = skb_header_pointer(skb, sizeof(struct arphdr) + - 2*ah->ar_hln+sizeof(__be32), - sizeof(_addr), &_addr); - if (ap == NULL) - return EBT_NOMATCH; - if (FWINV(info->daddr != (*ap & info->dmsk), - EBT_ARP_DST_IP)) - return EBT_NOMATCH; - } } if (info->bitmask & (EBT_ARP_SRC_MAC | EBT_ARP_DST_MAC)) { unsigned char _mac[ETH_ALEN], *mp; uint8_t verdict, i; - /* MAC addresses are 6 bytes */ - if (ah->ar_hln != ETH_ALEN) + if (ah->ar_hln != ETH_ALEN || ah->ar_hrd != htons(ARPHRD_ETHER)) return EBT_NOMATCH; if (info->bitmask & EBT_ARP_SRC_MAC) { mp = skb_header_pointer(skb, sizeof(struct arphdr), diff -puN net/bridge/netfilter/ebt_log.c~git-net net/bridge/netfilter/ebt_log.c --- a/net/bridge/netfilter/ebt_log.c~git-net +++ a/net/bridge/netfilter/ebt_log.c @@ -196,14 +196,10 @@ static int __init ebt_log_init(void) ret = ebt_register_watcher(&log); if (ret < 0) return ret; - if (nf_log_register(PF_BRIDGE, &ebt_log_logger) < 0) { - printk(KERN_WARNING "ebt_log: not logging via system console " - "since somebody else already registered for PF_INET\n"); - /* we cannot make module load fail here, since otherwise - * ebtables userspace would abort */ - } - - return 0; + ret = nf_log_register(PF_BRIDGE, &ebt_log_logger); + if (ret < 0 && ret != -EEXIST) + ebt_unregister_watcher(&log); + return ret; } static void __exit ebt_log_fini(void) diff -puN net/bridge/netfilter/ebt_ulog.c~git-net net/bridge/netfilter/ebt_ulog.c --- a/net/bridge/netfilter/ebt_ulog.c~git-net +++ a/net/bridge/netfilter/ebt_ulog.c @@ -130,6 +130,7 @@ static void ebt_ulog_packet(unsigned int unsigned int group = uloginfo->nlgroup; ebt_ulog_buff_t *ub = &ulog_buffers[group]; spinlock_t *lock = &ub->lock; + ktime_t kt; if ((uloginfo->cprange == 0) || (uloginfo->cprange > skb->len + ETH_HLEN)) @@ -164,9 +165,10 @@ static void ebt_ulog_packet(unsigned int /* Fill in the ulog data */ pm->version = EBT_ULOG_VERSION; - do_gettimeofday(&pm->stamp); + kt = ktime_get_real(); + pm->stamp = ktime_to_timeval(kt); if (ub->qlen == 1) - skb_set_timestamp(ub->skb, &pm->stamp); + ub->skb->tstamp = kt; pm->data_len = copy_len; pm->mark = skb->mark; pm->hook = hooknr; @@ -295,14 +297,12 @@ static int __init ebt_ulog_init(void) /* initialize ulog_buffers */ for (i = 0; i < EBT_ULOG_MAXNLGROUPS; i++) { - init_timer(&ulog_buffers[i].timer); - ulog_buffers[i].timer.function = ulog_timer; - ulog_buffers[i].timer.data = i; + setup_timer(&ulog_buffers[i].timer, ulog_timer, i); spin_lock_init(&ulog_buffers[i].lock); } ebtulognl = netlink_kernel_create(NETLINK_NFLOG, EBT_ULOG_MAXNLGROUPS, - NULL, THIS_MODULE); + NULL, NULL, THIS_MODULE); if (!ebtulognl) ret = -ENOMEM; else if ((ret = ebt_register_watcher(&ulog))) diff -puN net/compat.c~git-net net/compat.c --- 
a/net/compat.c~git-net +++ a/net/compat.c @@ -34,11 +34,11 @@ static inline int iov_from_user_compat_t { int tot_len = 0; - while(niov > 0) { + while (niov > 0) { compat_uptr_t buf; compat_size_t len; - if(get_user(len, &uiov32->iov_len) || + if (get_user(len, &uiov32->iov_len) || get_user(buf, &uiov32->iov_base)) { tot_len = -EFAULT; break; @@ -78,12 +78,12 @@ int verify_compat_iovec(struct msghdr *k { int tot_len; - if(kern_msg->msg_namelen) { - if(mode==VERIFY_READ) { + if (kern_msg->msg_namelen) { + if (mode==VERIFY_READ) { int err = move_addr_to_kernel(kern_msg->msg_name, kern_msg->msg_namelen, kern_address); - if(err < 0) + if (err < 0) return err; } kern_msg->msg_name = kern_address; @@ -93,7 +93,7 @@ int verify_compat_iovec(struct msghdr *k tot_len = iov_from_user_compat_to_kern(kern_iov, (struct compat_iovec __user *)kern_msg->msg_iov, kern_msg->msg_iovlen); - if(tot_len >= 0) + if (tot_len >= 0) kern_msg->msg_iov = kern_iov; return tot_len; @@ -146,8 +146,8 @@ int cmsghdr_from_user_compat_to_kern(str kcmlen = 0; kcmsg_base = kcmsg = (struct cmsghdr *)stackbuf; ucmsg = CMSG_COMPAT_FIRSTHDR(kmsg); - while(ucmsg != NULL) { - if(get_user(ucmlen, &ucmsg->cmsg_len)) + while (ucmsg != NULL) { + if (get_user(ucmlen, &ucmsg->cmsg_len)) return -EFAULT; /* Catch bogons. */ @@ -160,7 +160,7 @@ int cmsghdr_from_user_compat_to_kern(str kcmlen += tmp; ucmsg = cmsg_compat_nxthdr(kmsg, ucmsg, ucmlen); } - if(kcmlen == 0) + if (kcmlen == 0) return -EINVAL; /* The kcmlen holds the 64-bit version of the control length. @@ -176,7 +176,7 @@ int cmsghdr_from_user_compat_to_kern(str /* Now copy them over neatly. */ memset(kcmsg, 0, kcmlen); ucmsg = CMSG_COMPAT_FIRSTHDR(kmsg); - while(ucmsg != NULL) { + while (ucmsg != NULL) { if (__get_user(ucmlen, &ucmsg->cmsg_len)) goto Efault; if (!CMSG_COMPAT_OK(ucmlen, ucmsg, kmsg)) @@ -215,11 +215,12 @@ Efault: int put_cmsg_compat(struct msghdr *kmsg, int level, int type, int len, void *data) { struct compat_timeval ctv; + struct compat_timespec cts; struct compat_cmsghdr __user *cm = (struct compat_cmsghdr __user *) kmsg->msg_control; struct compat_cmsghdr cmhdr; int cmlen; - if(cm == NULL || kmsg->msg_controllen < sizeof(*cm)) { + if (cm == NULL || kmsg->msg_controllen < sizeof(*cm)) { kmsg->msg_flags |= MSG_CTRUNC; return 0; /* XXX: return error? check spec. 
*/ } @@ -229,11 +230,18 @@ int put_cmsg_compat(struct msghdr *kmsg, ctv.tv_sec = tv->tv_sec; ctv.tv_usec = tv->tv_usec; data = &ctv; - len = sizeof(struct compat_timeval); + len = sizeof(ctv); + } + if (level == SOL_SOCKET && type == SO_TIMESTAMPNS) { + struct timespec *ts = (struct timespec *)data; + cts.tv_sec = ts->tv_sec; + cts.tv_nsec = ts->tv_nsec; + data = &cts; + len = sizeof(cts); } cmlen = CMSG_COMPAT_LEN(len); - if(kmsg->msg_controllen < cmlen) { + if (kmsg->msg_controllen < cmlen) { kmsg->msg_flags |= MSG_CTRUNC; cmlen = kmsg->msg_controllen; } @@ -241,9 +249,9 @@ int put_cmsg_compat(struct msghdr *kmsg, cmhdr.cmsg_type = type; cmhdr.cmsg_len = cmlen; - if(copy_to_user(cm, &cmhdr, sizeof cmhdr)) + if (copy_to_user(cm, &cmhdr, sizeof cmhdr)) return -EFAULT; - if(copy_to_user(CMSG_COMPAT_DATA(cm), data, cmlen - sizeof(struct compat_cmsghdr))) + if (copy_to_user(CMSG_COMPAT_DATA(cm), data, cmlen - sizeof(struct compat_cmsghdr))) return -EFAULT; cmlen = CMSG_COMPAT_SPACE(len); kmsg->msg_control += cmlen; @@ -545,20 +553,49 @@ int compat_sock_get_timestamp(struct soc struct compat_timeval __user *ctv = (struct compat_timeval __user*) userstamp; int err = -ENOENT; + struct timeval tv; if (!sock_flag(sk, SOCK_TIMESTAMP)) sock_enable_timestamp(sk); - if (sk->sk_stamp.tv_sec == -1) + tv = ktime_to_timeval(sk->sk_stamp); + if (tv.tv_sec == -1) return err; - if (sk->sk_stamp.tv_sec == 0) - do_gettimeofday(&sk->sk_stamp); - if (put_user(sk->sk_stamp.tv_sec, &ctv->tv_sec) || - put_user(sk->sk_stamp.tv_usec, &ctv->tv_usec)) + if (tv.tv_sec == 0) { + sk->sk_stamp = ktime_get_real(); + tv = ktime_to_timeval(sk->sk_stamp); + } + err = 0; + if (put_user(tv.tv_sec, &ctv->tv_sec) || + put_user(tv.tv_usec, &ctv->tv_usec)) err = -EFAULT; return err; } EXPORT_SYMBOL(compat_sock_get_timestamp); +int compat_sock_get_timestampns(struct sock *sk, struct timespec __user *userstamp) +{ + struct compat_timespec __user *ctv = + (struct compat_timespec __user*) userstamp; + int err = -ENOENT; + struct timespec ts; + + if (!sock_flag(sk, SOCK_TIMESTAMP)) + sock_enable_timestamp(sk); + ts = ktime_to_timespec(sk->sk_stamp); + if (ts.tv_sec == -1) + return err; + if (ts.tv_sec == 0) { + sk->sk_stamp = ktime_get_real(); + ts = ktime_to_timespec(sk->sk_stamp); + } + err = 0; + if (put_user(ts.tv_sec, &ctv->tv_sec) || + put_user(ts.tv_nsec, &ctv->tv_nsec)) + err = -EFAULT; + return err; +} +EXPORT_SYMBOL(compat_sock_get_timestampns); + asmlinkage long compat_sys_getsockopt(int fd, int level, int optname, char __user *optval, int __user *optlen) { @@ -617,7 +654,7 @@ asmlinkage long compat_sys_socketcall(in a0 = a[0]; a1 = a[1]; - switch(call) { + switch (call) { case SYS_SOCKET: ret = sys_socket(a0, a1, a[2]); break; diff -puN net/core/datagram.c~git-net net/core/datagram.c --- a/net/core/datagram.c~git-net +++ a/net/core/datagram.c @@ -411,11 +411,11 @@ fault: return -EFAULT; } -__sum16 __skb_checksum_complete(struct sk_buff *skb) +__sum16 __skb_checksum_complete_head(struct sk_buff *skb, int len) { __sum16 sum; - sum = csum_fold(skb_checksum(skb, 0, skb->len, skb->csum)); + sum = csum_fold(skb_checksum(skb, 0, len, skb->csum)); if (likely(!sum)) { if (unlikely(skb->ip_summed == CHECKSUM_COMPLETE)) netdev_rx_csum_fault(skb->dev); @@ -423,6 +423,12 @@ __sum16 __skb_checksum_complete(struct s } return sum; } +EXPORT_SYMBOL(__skb_checksum_complete_head); + +__sum16 __skb_checksum_complete(struct sk_buff *skb) +{ + return __skb_checksum_complete_head(skb, skb->len); +} EXPORT_SYMBOL(__skb_checksum_complete); /** diff 
-puN net/core/dev.c~git-net net/core/dev.c --- a/net/core/dev.c~git-net +++ a/net/core/dev.c @@ -146,8 +146,8 @@ */ static DEFINE_SPINLOCK(ptype_lock); -static struct list_head ptype_base[16]; /* 16 way hashed list */ -static struct list_head ptype_all; /* Taps */ +static struct list_head ptype_base[16] __read_mostly; /* 16 way hashed list */ +static struct list_head ptype_all __read_mostly; /* Taps */ #ifdef CONFIG_NET_DMA static struct dma_client *net_dma_client; @@ -226,12 +226,6 @@ extern void netdev_unregister_sysfs(stru *******************************************************************************/ /* - * For efficiency - */ - -static int netdev_nit; - -/* * Add a protocol ID to the list. Now that the input handler is * smarter we can dispense with all the messy stuff that used to be * here. @@ -265,10 +259,9 @@ void dev_add_pack(struct packet_type *pt int hash; spin_lock_bh(&ptype_lock); - if (pt->type == htons(ETH_P_ALL)) { - netdev_nit++; + if (pt->type == htons(ETH_P_ALL)) list_add_rcu(&pt->list, &ptype_all); - } else { + else { hash = ntohs(pt->type) & 15; list_add_rcu(&pt->list, &ptype_base[hash]); } @@ -295,10 +288,9 @@ void __dev_remove_pack(struct packet_typ spin_lock_bh(&ptype_lock); - if (pt->type == htons(ETH_P_ALL)) { - netdev_nit--; + if (pt->type == htons(ETH_P_ALL)) head = &ptype_all; - } else + else head = &ptype_base[ntohs(pt->type) & 15]; list_for_each_entry(pt1, head, list) { @@ -817,7 +809,6 @@ static int default_rebuild_header(struct return 1; } - /** * dev_open - prepare an interface for use. * @dev: device to open @@ -1031,23 +1022,12 @@ void net_disable_timestamp(void) atomic_dec(&netstamp_needed); } -void __net_timestamp(struct sk_buff *skb) -{ - struct timeval tv; - - do_gettimeofday(&tv); - skb_set_timestamp(skb, &tv); -} -EXPORT_SYMBOL(__net_timestamp); - static inline void net_timestamp(struct sk_buff *skb) { if (atomic_read(&netstamp_needed)) __net_timestamp(skb); - else { - skb->tstamp.off_sec = 0; - skb->tstamp.off_usec = 0; - } + else + skb->tstamp.tv64 = 0; } /* @@ -1077,18 +1057,18 @@ static void dev_queue_xmit_nit(struct sk set by sender, so that the second statement is just protection against buggy protocols. 
*/ - skb2->mac.raw = skb2->data; + skb_reset_mac_header(skb2); - if (skb2->nh.raw < skb2->data || - skb2->nh.raw > skb2->tail) { + if (skb_network_header(skb2) < skb2->data || + skb2->network_header > skb2->tail) { if (net_ratelimit()) printk(KERN_CRIT "protocol %04x is " "buggy, dev %s\n", skb2->protocol, dev->name); - skb2->nh.raw = skb2->data; + skb_reset_network_header(skb2); } - skb2->h.raw = skb2->nh.raw; + skb2->transport_header = skb2->network_header; skb2->pkt_type = PACKET_OUTGOING; ptype->func(skb2, skb->dev, ptype, skb->dev); } @@ -1167,7 +1147,7 @@ EXPORT_SYMBOL(netif_device_attach); int skb_checksum_help(struct sk_buff *skb) { __wsum csum; - int ret = 0, offset = skb->h.raw - skb->data; + int ret = 0, offset; if (skb->ip_summed == CHECKSUM_COMPLETE) goto out_set_summed; @@ -1183,15 +1163,16 @@ int skb_checksum_help(struct sk_buff *sk goto out; } + offset = skb->csum_start - skb_headroom(skb); BUG_ON(offset > (int)skb->len); csum = skb_checksum(skb, offset, skb->len-offset, 0); - offset = skb->tail - skb->h.raw; + offset = skb_headlen(skb) - offset; BUG_ON(offset <= 0); BUG_ON(skb->csum_offset + 2 > offset); - *(__sum16*)(skb->h.raw + skb->csum_offset) = csum_fold(csum); - + *(__sum16 *)(skb->head + skb->csum_start + skb->csum_offset) = + csum_fold(csum); out_set_summed: skb->ip_summed = CHECKSUM_NONE; out: @@ -1217,8 +1198,8 @@ struct sk_buff *skb_gso_segment(struct s BUG_ON(skb_shinfo(skb)->frag_list); - skb->mac.raw = skb->data; - skb->mac_len = skb->nh.raw - skb->data; + skb_reset_mac_header(skb); + skb->mac_len = skb->network_header - skb->mac_header; __skb_pull(skb, skb->mac_len); if (unlikely(skb->ip_summed != CHECKSUM_PARTIAL)) { @@ -1235,7 +1216,8 @@ struct sk_buff *skb_gso_segment(struct s segs = ERR_PTR(err); if (err || skb_gso_ok(skb, features)) break; - __skb_push(skb, skb->data - skb->nh.raw); + __skb_push(skb, (skb->data - + skb_network_header(skb))); } segs = ptype->gso_segment(skb, features); break; @@ -1243,7 +1225,7 @@ struct sk_buff *skb_gso_segment(struct s } rcu_read_unlock(); - __skb_push(skb, skb->data - skb->mac.raw); + __skb_push(skb, skb->data - skb_mac_header(skb)); return segs; } @@ -1340,7 +1322,7 @@ static int dev_gso_segment(struct sk_buf int dev_hard_start_xmit(struct sk_buff *skb, struct net_device *dev) { if (likely(!skb->next)) { - if (netdev_nit) + if (!list_empty(&ptype_all)) dev_queue_xmit_nit(skb, dev); if (netif_needs_gso(dev, skb)) { @@ -1442,12 +1424,16 @@ int dev_queue_xmit(struct sk_buff *skb) /* If packet is not checksummed and device does not support * checksumming for this protocol, complete checksumming here. 
*/ - if (skb->ip_summed == CHECKSUM_PARTIAL && - (!(dev->features & NETIF_F_GEN_CSUM) && - (!(dev->features & NETIF_F_IP_CSUM) || - skb->protocol != htons(ETH_P_IP)))) - if (skb_checksum_help(skb)) - goto out_kfree_skb; + if (skb->ip_summed == CHECKSUM_PARTIAL) { + skb_set_transport_header(skb, skb->csum_start - + skb_headroom(skb)); + + if (!(dev->features & NETIF_F_GEN_CSUM) && + (!(dev->features & NETIF_F_IP_CSUM) || + skb->protocol != htons(ETH_P_IP))) + if (skb_checksum_help(skb)) + goto out_kfree_skb; + } gso: spin_lock_prefetch(&dev->queue_lock); @@ -1543,9 +1529,9 @@ out: Receiver routines =======================================================================*/ -int netdev_max_backlog = 1000; -int netdev_budget = 300; -int weight_p = 64; /* old backlog weight */ +int netdev_max_backlog __read_mostly = 1000; +int netdev_budget __read_mostly = 300; +int weight_p __read_mostly = 64; /* old backlog weight */ DEFINE_PER_CPU(struct netif_rx_stats, netdev_rx_stat) = { 0, }; @@ -1577,7 +1563,7 @@ int netif_rx(struct sk_buff *skb) if (netpoll_rx(skb)) return NET_RX_DROP; - if (!skb->tstamp.off_sec) + if (!skb->tstamp.tv64) net_timestamp(skb); /* @@ -1684,40 +1670,46 @@ static void net_tx_action(struct softirq } } -static __inline__ int deliver_skb(struct sk_buff *skb, - struct packet_type *pt_prev, - struct net_device *orig_dev) +static inline int deliver_skb(struct sk_buff *skb, + struct packet_type *pt_prev, + struct net_device *orig_dev) { atomic_inc(&skb->users); return pt_prev->func(skb, skb->dev, pt_prev, orig_dev); } #if defined(CONFIG_BRIDGE) || defined (CONFIG_BRIDGE_MODULE) -int (*br_handle_frame_hook)(struct net_bridge_port *p, struct sk_buff **pskb); +/* These hooks defined here for ATM */ struct net_bridge; struct net_bridge_fdb_entry *(*br_fdb_get_hook)(struct net_bridge *br, unsigned char *addr); -void (*br_fdb_put_hook)(struct net_bridge_fdb_entry *ent); +void (*br_fdb_put_hook)(struct net_bridge_fdb_entry *ent) __read_mostly; -static __inline__ int handle_bridge(struct sk_buff **pskb, - struct packet_type **pt_prev, int *ret, - struct net_device *orig_dev) +/* + * If bridge module is loaded call bridging hook. + * returns NULL if packet was consumed. 
+ */ +struct sk_buff *(*br_handle_frame_hook)(struct net_bridge_port *p, + struct sk_buff *skb) __read_mostly; +static inline struct sk_buff *handle_bridge(struct sk_buff *skb, + struct packet_type **pt_prev, int *ret, + struct net_device *orig_dev) { struct net_bridge_port *port; - if ((*pskb)->pkt_type == PACKET_LOOPBACK || - (port = rcu_dereference((*pskb)->dev->br_port)) == NULL) - return 0; + if (skb->pkt_type == PACKET_LOOPBACK || + (port = rcu_dereference(skb->dev->br_port)) == NULL) + return skb; if (*pt_prev) { - *ret = deliver_skb(*pskb, *pt_prev, orig_dev); + *ret = deliver_skb(skb, *pt_prev, orig_dev); *pt_prev = NULL; } - return br_handle_frame_hook(port, pskb); + return br_handle_frame_hook(port, skb); } #else -#define handle_bridge(skb, pt_prev, ret, orig_dev) (0) +#define handle_bridge(skb, pt_prev, ret, orig_dev) (skb) #endif #ifdef CONFIG_NET_CLS_ACT @@ -1747,10 +1739,10 @@ static int ing_filter(struct sk_buff *sk skb->tc_verd = SET_TC_AT(skb->tc_verd,AT_INGRESS); - spin_lock(&dev->queue_lock); + spin_lock(&dev->ingress_lock); if ((q = dev->qdisc_ingress) != NULL) result = q->enqueue(skb, q); - spin_unlock(&dev->queue_lock); + spin_unlock(&dev->ingress_lock); } @@ -1769,7 +1761,7 @@ int netif_receive_skb(struct sk_buff *sk if (skb->dev->poll && netpoll_rx(skb)) return NET_RX_DROP; - if (!skb->tstamp.off_sec) + if (!skb->tstamp.tv64) net_timestamp(skb); if (!skb->iif) @@ -1782,8 +1774,9 @@ int netif_receive_skb(struct sk_buff *sk __get_cpu_var(netdev_rx_stat).total++; - skb->h.raw = skb->nh.raw = skb->data; - skb->mac_len = skb->nh.raw - skb->mac.raw; + skb_reset_network_header(skb); + skb_reset_transport_header(skb); + skb->mac_len = skb->network_header - skb->mac_header; pt_prev = NULL; @@ -1823,7 +1816,8 @@ int netif_receive_skb(struct sk_buff *sk ncls: #endif - if (handle_bridge(&skb, &pt_prev, &ret, orig_dev)) + skb = handle_bridge(skb, &pt_prev, &ret, orig_dev); + if (!skb) goto out; type = skb->protocol; @@ -2076,7 +2070,7 @@ static int dev_ifconf(char __user *arg) * This is invoked by the /proc filesystem handler to display a device * in detail. 
*/ -static __inline__ struct net_device *dev_get_idx(loff_t pos) +static struct net_device *dev_get_idx(loff_t pos) { struct net_device *dev; loff_t i; @@ -2105,9 +2099,9 @@ void dev_seq_stop(struct seq_file *seq, static void dev_seq_printf_stats(struct seq_file *seq, struct net_device *dev) { - if (dev->get_stats) { - struct net_device_stats *stats = dev->get_stats(dev); + struct net_device_stats *stats = dev->get_stats(dev); + if (stats) { seq_printf(seq, "%6s:%8lu %7lu %4lu %4lu %4lu %5lu %10lu %9lu " "%8lu %7lu %4lu %4lu %4lu %5lu %7lu %10lu\n", dev->name, stats->rx_bytes, stats->rx_packets, @@ -2185,7 +2179,7 @@ static int softnet_seq_show(struct seq_f return 0; } -static struct seq_operations dev_seq_ops = { +static const struct seq_operations dev_seq_ops = { .start = dev_seq_start, .next = dev_seq_next, .stop = dev_seq_stop, @@ -2205,7 +2199,7 @@ static const struct file_operations dev_ .release = seq_release, }; -static struct seq_operations softnet_seq_ops = { +static const struct seq_operations softnet_seq_ops = { .start = softnet_seq_start, .next = softnet_seq_next, .stop = softnet_seq_stop, @@ -2225,6 +2219,135 @@ static const struct file_operations soft .release = seq_release, }; +static void *ptype_get_idx(loff_t pos) +{ + struct packet_type *pt = NULL; + loff_t i = 0; + int t; + + list_for_each_entry_rcu(pt, &ptype_all, list) { + if (i == pos) + return pt; + ++i; + } + + for (t = 0; t < 16; t++) { + list_for_each_entry_rcu(pt, &ptype_base[t], list) { + if (i == pos) + return pt; + ++i; + } + } + return NULL; +} + +static void *ptype_seq_start(struct seq_file *seq, loff_t *pos) +{ + rcu_read_lock(); + return *pos ? ptype_get_idx(*pos - 1) : SEQ_START_TOKEN; +} + +static void *ptype_seq_next(struct seq_file *seq, void *v, loff_t *pos) +{ + struct packet_type *pt; + struct list_head *nxt; + int hash; + + ++*pos; + if (v == SEQ_START_TOKEN) + return ptype_get_idx(0); + + pt = v; + nxt = pt->list.next; + if (pt->type == htons(ETH_P_ALL)) { + if (nxt != &ptype_all) + goto found; + hash = 0; + nxt = ptype_base[0].next; + } else + hash = ntohs(pt->type) & 15; + + while (nxt == &ptype_base[hash]) { + if (++hash >= 16) + return NULL; + nxt = ptype_base[hash].next; + } +found: + return list_entry(nxt, struct packet_type, list); +} + +static void ptype_seq_stop(struct seq_file *seq, void *v) +{ + rcu_read_unlock(); +} + +static void ptype_seq_decode(struct seq_file *seq, void *sym) +{ +#ifdef CONFIG_KALLSYMS + unsigned long offset = 0, symsize; + const char *symname; + char *modname; + char namebuf[128]; + + symname = kallsyms_lookup((unsigned long)sym, &symsize, &offset, + &modname, namebuf); + + if (symname) { + char *delim = ":"; + + if (!modname) + modname = delim = ""; + seq_printf(seq, "%s%s%s%s+0x%lx", delim, modname, delim, + symname, offset); + return; + } +#endif + + seq_printf(seq, "[%p]", sym); +} + +static int ptype_seq_show(struct seq_file *seq, void *v) +{ + struct packet_type *pt = v; + + if (v == SEQ_START_TOKEN) + seq_puts(seq, "Type Device Function\n"); + else { + if (pt->type == htons(ETH_P_ALL)) + seq_puts(seq, "ALL "); + else + seq_printf(seq, "%04x", ntohs(pt->type)); + + seq_printf(seq, " %-8s ", + pt->dev ? 
pt->dev->name : ""); + ptype_seq_decode(seq, pt->func); + seq_putc(seq, '\n'); + } + + return 0; +} + +static const struct seq_operations ptype_seq_ops = { + .start = ptype_seq_start, + .next = ptype_seq_next, + .stop = ptype_seq_stop, + .show = ptype_seq_show, +}; + +static int ptype_seq_open(struct inode *inode, struct file *file) +{ + return seq_open(file, &ptype_seq_ops); +} + +static const struct file_operations ptype_seq_fops = { + .owner = THIS_MODULE, + .open = ptype_seq_open, + .read = seq_read, + .llseek = seq_lseek, + .release = seq_release, +}; + + #ifdef CONFIG_WIRELESS_EXT extern int wireless_proc_init(void); #else @@ -2239,6 +2362,9 @@ static int __init dev_proc_init(void) goto out; if (!proc_net_fops_create("softnet_stat", S_IRUGO, &softnet_seq_fops)) goto out_dev; + if (!proc_net_fops_create("ptype", S_IRUGO, &ptype_seq_fops)) + goto out_dev2; + if (wireless_proc_init()) goto out_softnet; rc = 0; @@ -2246,6 +2372,8 @@ out: return rc; out_softnet: proc_net_remove("softnet_stat"); +out_dev2: + proc_net_remove("ptype"); out_dev: proc_net_remove("dev"); goto out; @@ -2847,7 +2975,7 @@ static int dev_boot_phase = 1; static DEFINE_SPINLOCK(net_todo_list_lock); static struct list_head net_todo_list = LIST_HEAD_INIT(net_todo_list); -static inline void net_set_todo(struct net_device *dev) +static void net_set_todo(struct net_device *dev) { spin_lock(&net_todo_list_lock); list_add_tail(&dev->todo_list, &net_todo_list); @@ -2888,9 +3016,7 @@ int register_netdevice(struct net_device spin_lock_init(&dev->queue_lock); spin_lock_init(&dev->_xmit_lock); dev->xmit_lock_owner = -1; -#ifdef CONFIG_NET_CLS_ACT spin_lock_init(&dev->ingress_lock); -#endif dev->iflink = -1; @@ -3002,7 +3128,7 @@ out: * chain. 0 is returned on success. A negative errno code is returned * on a failure to set up the device, or if the name is a duplicate. * - * This is a wrapper around register_netdev that takes the rtnl semaphore + * This is a wrapper around register_netdevice that takes the rtnl semaphore * and expands the device name if you passed a format string to * alloc_netdev. 
*/ @@ -3157,6 +3283,13 @@ out: mutex_unlock(&net_todo_run_mutex); } +static struct net_device_stats *maybe_internal_stats(struct net_device *dev) +{ + if (dev->features & NETIF_F_INTERNAL_STATS) + return &dev->stats; + return NULL; +} + /** * alloc_netdev - allocate network device * @sizeof_priv: size of private data to allocate space for @@ -3192,6 +3325,7 @@ struct net_device *alloc_netdev(int size if (sizeof_priv) dev->priv = netdev_priv(dev); + dev->get_stats = maybe_internal_stats; setup(dev); strcpy(dev->name, name); return dev; diff -puN net/core/dev_mcast.c~git-net net/core/dev_mcast.c --- a/net/core/dev_mcast.c~git-net +++ a/net/core/dev_mcast.c @@ -264,7 +264,7 @@ static int dev_mc_seq_show(struct seq_fi return 0; } -static struct seq_operations dev_mc_seq_ops = { +static const struct seq_operations dev_mc_seq_ops = { .start = dev_mc_seq_start, .next = dev_mc_seq_next, .stop = dev_mc_seq_stop, diff -puN net/core/ethtool.c~git-net net/core/ethtool.c --- a/net/core/ethtool.c~git-net +++ a/net/core/ethtool.c @@ -836,7 +836,7 @@ int dev_ethtool(struct ifreq *ifr) return -EPERM; } - if(dev->ethtool_ops->begin) + if (dev->ethtool_ops->begin) if ((rc = dev->ethtool_ops->begin(dev)) < 0) return rc; @@ -952,7 +952,7 @@ int dev_ethtool(struct ifreq *ifr) rc = -EOPNOTSUPP; } - if(dev->ethtool_ops->complete) + if (dev->ethtool_ops->complete) dev->ethtool_ops->complete(dev); if (old_features != dev->features) diff -puN net/core/fib_rules.c~git-net net/core/fib_rules.c --- a/net/core/fib_rules.c~git-net +++ a/net/core/fib_rules.c @@ -44,6 +44,12 @@ static void rules_ops_put(struct fib_rul module_put(ops->owner); } +static void flush_route_cache(struct fib_rules_ops *ops) +{ + if (ops->flush_cache) + ops->flush_cache(); +} + int fib_rules_register(struct fib_rules_ops *ops) { int err = -EEXIST; @@ -132,10 +138,25 @@ int fib_rules_lookup(struct fib_rules_op rcu_read_lock(); list_for_each_entry_rcu(rule, ops->rules_list, list) { +jumped: if (!fib_rule_match(rule, ops, fl, flags)) continue; - err = ops->action(rule, fl, flags, arg); + if (rule->action == FR_ACT_GOTO) { + struct fib_rule *target; + + target = rcu_dereference(rule->ctarget); + if (target == NULL) { + continue; + } else { + rule = target; + goto jumped; + } + } else if (rule->action == FR_ACT_NOP) + continue; + else + err = ops->action(rule, fl, flags, arg); + if (err != -EAGAIN) { fib_rule_get(rule); arg->rule = rule; @@ -174,13 +195,13 @@ errout: return err; } -int fib_nl_newrule(struct sk_buff *skb, struct nlmsghdr* nlh, void *arg) +static int fib_nl_newrule(struct sk_buff *skb, struct nlmsghdr* nlh, void *arg) { struct fib_rule_hdr *frh = nlmsg_data(nlh); struct fib_rules_ops *ops = NULL; struct fib_rule *rule, *r, *last = NULL; struct nlattr *tb[FRA_MAX+1]; - int err = -EINVAL; + int err = -EINVAL, unresolved = 0; if (nlh->nlmsg_len < nlmsg_msg_size(sizeof(*frh))) goto errout; @@ -237,6 +258,28 @@ int fib_nl_newrule(struct sk_buff *skb, if (!rule->pref && ops->default_pref) rule->pref = ops->default_pref(); + err = -EINVAL; + if (tb[FRA_GOTO]) { + if (rule->action != FR_ACT_GOTO) + goto errout_free; + + rule->target = nla_get_u32(tb[FRA_GOTO]); + /* Backward jumps are prohibited to avoid endless loops */ + if (rule->target <= rule->pref) + goto errout_free; + + list_for_each_entry(r, ops->rules_list, list) { + if (r->pref == rule->target) { + rule->ctarget = r; + break; + } + } + + if (rule->ctarget == NULL) + unresolved = 1; + } else if (rule->action == FR_ACT_GOTO) + goto errout_free; + err = ops->configure(rule, skb, nlh, 
frh, tb); if (err < 0) goto errout_free; @@ -249,12 +292,35 @@ int fib_nl_newrule(struct sk_buff *skb, fib_rule_get(rule); + if (ops->unresolved_rules) { + /* + * There are unresolved goto rules in the list, check if + * any of them are pointing to this new rule. + */ + list_for_each_entry(r, ops->rules_list, list) { + if (r->action == FR_ACT_GOTO && + r->target == rule->pref) { + BUG_ON(r->ctarget != NULL); + rcu_assign_pointer(r->ctarget, rule); + if (--ops->unresolved_rules == 0) + break; + } + } + } + + if (rule->action == FR_ACT_GOTO) + ops->nr_goto_rules++; + + if (unresolved) + ops->unresolved_rules++; + if (last) list_add_rcu(&rule->list, &last->list); else list_add_rcu(&rule->list, ops->rules_list); notify_rule_change(RTM_NEWRULE, rule, ops, nlh, NETLINK_CB(skb).pid); + flush_route_cache(ops); rules_ops_put(ops); return 0; @@ -265,11 +331,11 @@ errout: return err; } -int fib_nl_delrule(struct sk_buff *skb, struct nlmsghdr* nlh, void *arg) +static int fib_nl_delrule(struct sk_buff *skb, struct nlmsghdr* nlh, void *arg) { struct fib_rule_hdr *frh = nlmsg_data(nlh); struct fib_rules_ops *ops = NULL; - struct fib_rule *rule; + struct fib_rule *rule, *tmp; struct nlattr *tb[FRA_MAX+1]; int err = -EINVAL; @@ -322,10 +388,30 @@ int fib_nl_delrule(struct sk_buff *skb, } list_del_rcu(&rule->list); + + if (rule->action == FR_ACT_GOTO) + ops->nr_goto_rules--; + + /* + * Check if this rule is a target to any of them. If so, + * disable them. As this operation is eventually very + * expensive, it is only performed if goto rules have + * actually been added. + */ + if (ops->nr_goto_rules > 0) { + list_for_each_entry(tmp, ops->rules_list, list) { + if (tmp->ctarget == rule) { + rcu_assign_pointer(tmp->ctarget, NULL); + ops->unresolved_rules++; + } + } + } + synchronize_rcu(); notify_rule_change(RTM_DELRULE, rule, ops, nlh, NETLINK_CB(skb).pid); fib_rule_put(rule); + flush_route_cache(ops); rules_ops_put(ops); return 0; } @@ -371,9 +457,16 @@ static int fib_nl_fill_rule(struct sk_bu frh->action = rule->action; frh->flags = rule->flags; - if (rule->ifname[0]) + if (rule->action == FR_ACT_GOTO && rule->ctarget == NULL) + frh->flags |= FIB_RULE_UNRESOLVED; + + if (rule->ifname[0]) { NLA_PUT_STRING(skb, FRA_IFNAME, rule->ifname); + if (rule->ifindex == -1) + frh->flags |= FIB_RULE_DEV_DETACHED; + } + if (rule->pref) NLA_PUT_U32(skb, FRA_PRIORITY, rule->pref); @@ -383,6 +476,9 @@ static int fib_nl_fill_rule(struct sk_bu if (rule->mark_mask || rule->mark) NLA_PUT_U32(skb, FRA_FWMASK, rule->mark_mask); + if (rule->target) + NLA_PUT_U32(skb, FRA_GOTO, rule->target); + if (ops->fill(rule, skb, nlh, frh) < 0) goto nla_put_failure; @@ -393,19 +489,14 @@ nla_put_failure: return -EMSGSIZE; } -int fib_rules_dump(struct sk_buff *skb, struct netlink_callback *cb, int family) +static int dump_rules(struct sk_buff *skb, struct netlink_callback *cb, + struct fib_rules_ops *ops) { int idx = 0; struct fib_rule *rule; - struct fib_rules_ops *ops; - - ops = lookup_rules_ops(family); - if (ops == NULL) - return -EAFNOSUPPORT; - rcu_read_lock(); - list_for_each_entry_rcu(rule, ops->rules_list, list) { - if (idx < cb->args[0]) + list_for_each_entry(rule, ops->rules_list, list) { + if (idx < cb->args[1]) goto skip; if (fib_nl_fill_rule(skb, rule, NETLINK_CB(cb->skb).pid, @@ -415,14 +506,44 @@ int fib_rules_dump(struct sk_buff *skb, skip: idx++; } - rcu_read_unlock(); - cb->args[0] = idx; + cb->args[1] = idx; rules_ops_put(ops); return skb->len; } -EXPORT_SYMBOL_GPL(fib_rules_dump); +static int fib_nl_dumprule(struct 
sk_buff *skb, struct netlink_callback *cb) +{ + struct fib_rules_ops *ops; + int idx = 0, family; + + family = rtnl_msg_family(cb->nlh); + if (family != AF_UNSPEC) { + /* Protocol specific dump request */ + ops = lookup_rules_ops(family); + if (ops == NULL) + return -EAFNOSUPPORT; + + return dump_rules(skb, cb, ops); + } + + rcu_read_lock(); + list_for_each_entry_rcu(ops, &rules_ops, list) { + if (idx < cb->args[0] || !try_module_get(ops->owner)) + goto skip; + + if (dump_rules(skb, cb, ops) < 0) + break; + + cb->args[1] = 0; + skip: + idx++; + } + rcu_read_unlock(); + cb->args[0] = idx; + + return skb->len; +} static void notify_rule_change(int event, struct fib_rule *rule, struct fib_rules_ops *ops, struct nlmsghdr *nlh, @@ -501,6 +622,10 @@ static struct notifier_block fib_rules_n static int __init fib_rules_init(void) { + rtnl_register(PF_UNSPEC, RTM_NEWRULE, fib_nl_newrule, NULL); + rtnl_register(PF_UNSPEC, RTM_DELRULE, fib_nl_delrule, NULL); + rtnl_register(PF_UNSPEC, RTM_GETRULE, NULL, fib_nl_dumprule); + return register_netdevice_notifier(&fib_rules_notifier); } diff -puN net/core/filter.c~git-net net/core/filter.c --- a/net/core/filter.c~git-net +++ a/net/core/filter.c @@ -42,11 +42,11 @@ static void *__load_pointer(struct sk_bu u8 *ptr = NULL; if (k >= SKF_NET_OFF) - ptr = skb->nh.raw + k - SKF_NET_OFF; + ptr = skb_network_header(skb) + k - SKF_NET_OFF; else if (k >= SKF_LL_OFF) - ptr = skb->mac.raw + k - SKF_LL_OFF; + ptr = skb_mac_header(skb) + k - SKF_LL_OFF; - if (ptr >= skb->head && ptr < skb->tail) + if (ptr >= skb->head && ptr < skb_tail_pointer(skb)) return ptr; return NULL; } diff -puN net/core/gen_stats.c~git-net net/core/gen_stats.c --- a/net/core/gen_stats.c~git-net +++ a/net/core/gen_stats.c @@ -61,7 +61,7 @@ gnet_stats_start_copy_compat(struct sk_b spin_lock_bh(lock); d->lock = lock; if (type) - d->tail = (struct rtattr *) skb->tail; + d->tail = (struct rtattr *)skb_tail_pointer(skb); d->skb = skb; d->compat_tc_stats = tc_stats_type; d->compat_xstats = xstats_type; @@ -212,7 +212,7 @@ int gnet_stats_finish_copy(struct gnet_dump *d) { if (d->tail) - d->tail->rta_len = d->skb->tail - (u8 *) d->tail; + d->tail->rta_len = skb_tail_pointer(d->skb) - (u8 *)d->tail; if (d->compat_tc_stats) if (gnet_stats_copy(d, d->compat_tc_stats, &d->tc_stats, diff -puN net/core/link_watch.c~git-net net/core/link_watch.c --- a/net/core/link_watch.c~git-net +++ a/net/core/link_watch.c @@ -79,7 +79,7 @@ static void rfc2863_policy(struct net_de case IF_LINK_MODE_DEFAULT: default: break; - }; + } dev->operstate = operstate; diff -puN net/core/neighbour.c~git-net net/core/neighbour.c --- a/net/core/neighbour.c~git-net +++ a/net/core/neighbour.c @@ -1125,7 +1125,7 @@ int neigh_compat_output(struct sk_buff * { struct net_device *dev = skb->dev; - __skb_pull(skb, skb->nh.raw - skb->data); + __skb_pull(skb, skb_network_offset(skb)); if (dev->hard_header && dev->hard_header(skb, dev, ntohs(skb->protocol), NULL, NULL, @@ -1147,7 +1147,7 @@ int neigh_resolve_output(struct sk_buff if (!dst || !(neigh = dst->neighbour)) goto discard; - __skb_pull(skb, skb->nh.raw - skb->data); + __skb_pull(skb, skb_network_offset(skb)); if (!neigh_event_send(neigh, skb)) { int err; @@ -1190,7 +1190,7 @@ int neigh_connected_output(struct sk_buf struct neighbour *neigh = dst->neighbour; struct net_device *dev = neigh->dev; - __skb_pull(skb, skb->nh.raw - skb->data); + __skb_pull(skb, skb_network_offset(skb)); read_lock_bh(&neigh->lock); err = dev->hard_header(skb, dev, ntohs(skb->protocol), @@ -1441,7 +1441,7 @@ int 
neigh_table_clear(struct neigh_table return 0; } -int neigh_delete(struct sk_buff *skb, struct nlmsghdr *nlh, void *arg) +static int neigh_delete(struct sk_buff *skb, struct nlmsghdr *nlh, void *arg) { struct ndmsg *ndm; struct nlattr *dst_attr; @@ -1506,7 +1506,7 @@ out: return err; } -int neigh_add(struct sk_buff *skb, struct nlmsghdr *nlh, void *arg) +static int neigh_add(struct sk_buff *skb, struct nlmsghdr *nlh, void *arg) { struct ndmsg *ndm; struct nlattr *tb[NDA_MAX+1]; @@ -1786,7 +1786,7 @@ static struct nla_policy nl_ntbl_parm_po [NDTPA_LOCKTIME] = { .type = NLA_U64 }, }; -int neightbl_set(struct sk_buff *skb, struct nlmsghdr *nlh, void *arg) +static int neightbl_set(struct sk_buff *skb, struct nlmsghdr *nlh, void *arg) { struct neigh_table *tbl; struct ndtmsg *ndtmsg; @@ -1910,7 +1910,7 @@ errout: return err; } -int neightbl_dump_info(struct sk_buff *skb, struct netlink_callback *cb) +static int neightbl_dump_info(struct sk_buff *skb, struct netlink_callback *cb) { int family, tidx, nidx = 0; int tbl_skip = cb->args[0]; @@ -2034,7 +2034,7 @@ out: return rc; } -int neigh_dump_info(struct sk_buff *skb, struct netlink_callback *cb) +static int neigh_dump_info(struct sk_buff *skb, struct netlink_callback *cb) { struct neigh_table *tbl; int t, family, s_t; @@ -2393,7 +2393,7 @@ static int neigh_stat_seq_show(struct se return 0; } -static struct seq_operations neigh_stat_seq_ops = { +static const struct seq_operations neigh_stat_seq_ops = { .start = neigh_stat_seq_start, .next = neigh_stat_seq_next, .stop = neigh_stat_seq_stop, @@ -2746,14 +2746,26 @@ void neigh_sysctl_unregister(struct neig #endif /* CONFIG_SYSCTL */ +static int __init neigh_init(void) +{ + rtnl_register(PF_UNSPEC, RTM_NEWNEIGH, neigh_add, NULL); + rtnl_register(PF_UNSPEC, RTM_DELNEIGH, neigh_delete, NULL); + rtnl_register(PF_UNSPEC, RTM_GETNEIGH, NULL, neigh_dump_info); + + rtnl_register(PF_UNSPEC, RTM_GETNEIGHTBL, NULL, neightbl_dump_info); + rtnl_register(PF_UNSPEC, RTM_SETNEIGHTBL, neightbl_set, NULL); + + return 0; +} + +subsys_initcall(neigh_init); + EXPORT_SYMBOL(__neigh_event_send); EXPORT_SYMBOL(neigh_changeaddr); EXPORT_SYMBOL(neigh_compat_output); EXPORT_SYMBOL(neigh_connected_output); EXPORT_SYMBOL(neigh_create); -EXPORT_SYMBOL(neigh_delete); EXPORT_SYMBOL(neigh_destroy); -EXPORT_SYMBOL(neigh_dump_info); EXPORT_SYMBOL(neigh_event_ns); EXPORT_SYMBOL(neigh_ifdown); EXPORT_SYMBOL(neigh_lookup); diff -puN net/core/net-sysfs.c~git-net net/core/net-sysfs.c --- a/net/core/net-sysfs.c~git-net +++ a/net/core/net-sysfs.c @@ -352,8 +352,8 @@ static ssize_t wireless_show(struct devi read_lock(&dev_base_lock); if (dev_isalive(dev)) { - if(dev->wireless_handlers && - dev->wireless_handlers->get_wireless_stats) + if (dev->wireless_handlers && + dev->wireless_handlers->get_wireless_stats) iw = dev->wireless_handlers->get_wireless_stats(dev); if (iw != NULL) ret = (*format)(iw, buf); diff -puN net/core/netpoll.c~git-net net/core/netpoll.c --- a/net/core/netpoll.c~git-net +++ a/net/core/netpoll.c @@ -86,7 +86,7 @@ static __sum16 checksum_udp(struct sk_bu { __wsum psum; - if (uh->check == 0 || skb->ip_summed == CHECKSUM_UNNECESSARY) + if (uh->check == 0 || skb_csum_unnecessary(skb)) return 0; psum = csum_tcpudp_nofold(saddr, daddr, ulen, IPPROTO_UDP, 0); @@ -293,10 +293,12 @@ void netpoll_send_udp(struct netpoll *np if (!skb) return; - memcpy(skb->data, msg, len); + skb_copy_to_linear_data(skb, msg, len); skb->len += len; - skb->h.uh = udph = (struct udphdr *) skb_push(skb, sizeof(*udph)); + skb_push(skb, sizeof(*udph)); + 
skb_reset_transport_header(skb); + udph = udp_hdr(skb); udph->source = htons(np->local_port); udph->dest = htons(np->remote_port); udph->len = htons(udp_len); @@ -308,7 +310,9 @@ void netpoll_send_udp(struct netpoll *np if (udph->check == 0) udph->check = CSUM_MANGLED_0; - skb->nh.iph = iph = (struct iphdr *)skb_push(skb, sizeof(*iph)); + skb_push(skb, sizeof(*iph)); + skb_reset_network_header(skb); + iph = ip_hdr(skb); /* iph->version = 4; iph->ihl = 5; */ put_unaligned(0x45, (unsigned char *)iph); @@ -324,7 +328,7 @@ void netpoll_send_udp(struct netpoll *np iph->check = ip_fast_csum((unsigned char *)iph, iph->ihl); eth = (struct ethhdr *) skb_push(skb, ETH_HLEN); - skb->mac.raw = skb->data; + skb_reset_mac_header(skb); skb->protocol = eth->h_proto = htons(ETH_P_IP); memcpy(eth->h_source, np->local_mac, 6); memcpy(eth->h_dest, np->remote_mac, 6); @@ -359,8 +363,9 @@ static void arp_reply(struct sk_buff *sk (2 * sizeof(u32))))) return; - skb->h.raw = skb->nh.raw = skb->data; - arp = skb->nh.arph; + skb_reset_network_header(skb); + skb_reset_transport_header(skb); + arp = arp_hdr(skb); if ((arp->ar_hrd != htons(ARPHRD_ETHER) && arp->ar_hrd != htons(ARPHRD_IEEE802)) || @@ -389,7 +394,7 @@ static void arp_reply(struct sk_buff *sk if (!send_skb) return; - send_skb->nh.raw = send_skb->data; + skb_reset_network_header(send_skb); arp = (struct arphdr *) skb_put(send_skb, size); send_skb->dev = skb->dev; send_skb->protocol = htons(ETH_P_ARP); @@ -443,7 +448,7 @@ int __netpoll_rx(struct sk_buff *skb) goto out; /* check if netpoll clients need ARP */ - if (skb->protocol == __constant_htons(ETH_P_ARP) && + if (skb->protocol == htons(ETH_P_ARP) && atomic_read(&trapped)) { skb_queue_tail(&npi->arp_tx, skb); return 1; diff -puN net/core/pktgen.c~git-net net/core/pktgen.c --- a/net/core/pktgen.c~git-net +++ a/net/core/pktgen.c @@ -164,14 +164,11 @@ #define VERSION "pktgen v2.68: Packet Generator for packet performance testing.\n" -/* #define PG_DEBUG(a) a */ -#define PG_DEBUG(a) - /* The buckets are exponential in 'width' */ #define LAT_BUCKETS_MAX 32 #define IP_NAME_SZ 32 #define MAX_MPLS_LABELS 16 /* This is the max label stack depth */ -#define MPLS_STACK_BOTTOM __constant_htonl(0x00000100) +#define MPLS_STACK_BOTTOM htonl(0x00000100) /* Device flag bits */ #define F_IPSRC_RND (1<<0) /* IP-Src Random */ @@ -214,15 +211,11 @@ struct flow_state { }; struct pktgen_dev { - /* * Try to keep frequent/infrequent used vars. separated. 
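[Editor's note] A pattern worth calling out, since it repeats in the netpoll hunks above and again in the pktgen and skbuff hunks below: rather than storing typed pointers in skb->h / skb->nh, the code now pushes (or puts) the header and records its position with a skb_reset_*_header() call, and later readers go through the typed accessors (udp_hdr(), ip_hdr(), arp_hdr()). A condensed, illustrative sketch of that idiom, not a complete function:

#include <linux/skbuff.h>
#include <linux/udp.h>

/* Condensed from the new netpoll_send_udp() above; field setup trimmed. */
static void example_build_udp(struct sk_buff *skb, __be16 sport, __be16 dport,
			      unsigned int udp_len)
{
	struct udphdr *udph;

	skb_push(skb, sizeof(*udph));		/* make room in the headroom    */
	skb_reset_transport_header(skb);	/* remember where the header is */
	udph = udp_hdr(skb);			/* typed view of that position  */

	udph->source = sport;
	udph->dest   = dport;
	udph->len    = htons(udp_len);
	udph->check  = 0;
}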
*/ - - char ifname[IFNAMSIZ]; - char result[512]; - - struct pktgen_thread *pg_thread; /* the owner */ + struct proc_dir_entry *entry; /* proc file */ + struct pktgen_thread *pg_thread;/* the owner */ struct list_head list; /* Used for chaining in the thread's run-queue */ int running; /* if this changes to false, the test will stop */ @@ -349,6 +342,8 @@ struct pktgen_dev { unsigned cflows; /* Concurrent flows (config) */ unsigned lflow; /* Flow length (config) */ unsigned nflows; /* accumulated flows (stats) */ + + char result[512]; }; struct pktgen_hdr { @@ -468,17 +463,6 @@ static inline __u64 pg_div64(__u64 n, __ return tmp; } -static inline u32 pktgen_random(void) -{ -#if 0 - __u32 n; - get_random_bytes(&n, 4); - return n; -#else - return net_random(); -#endif -} - static inline __u64 getCurMs(void) { struct timeval tv; @@ -512,7 +496,7 @@ static void pktgen_stop_all_threads_ifs( static int pktgen_stop_device(struct pktgen_dev *pkt_dev); static void pktgen_stop(struct pktgen_thread *t); static void pktgen_clear_counters(struct pktgen_dev *pkt_dev); -static int pktgen_mark_device(const char *ifname); + static unsigned int scan_ip6(const char *s, char ip[16]); static unsigned int fmt_ip6(char *s, const char ip[16]); @@ -606,7 +590,7 @@ static int pktgen_if_show(struct seq_fil " frags: %d delay: %u clone_skb: %d ifname: %s\n", pkt_dev->nfrags, 1000 * pkt_dev->delay_us + pkt_dev->delay_ns, - pkt_dev->clone_skb, pkt_dev->ifname); + pkt_dev->clone_skb, pkt_dev->odev->name); seq_printf(seq, " flows: %u flowlen: %u\n", pkt_dev->cflows, pkt_dev->lflow); @@ -661,7 +645,7 @@ static int pktgen_if_show(struct seq_fil if (pkt_dev->nr_labels) { unsigned i; seq_printf(seq, " mpls: "); - for(i = 0; i < pkt_dev->nr_labels; i++) + for (i = 0; i < pkt_dev->nr_labels; i++) seq_printf(seq, "%08x%s", ntohl(pkt_dev->labels[i]), i == pkt_dev->nr_labels-1 ? "\n" : ", "); } @@ -766,7 +750,7 @@ static int hex32_arg(const char __user * int i = 0; *num = 0; - for(; i < maxlen; i++) { + for (; i < maxlen; i++) { char c; *num <<= 4; if (get_user(c, &user_buffer[i])) @@ -802,7 +786,7 @@ static int count_trail_chars(const char break; default: goto done; - }; + } } done: return i; @@ -845,7 +829,7 @@ static int strn_len(const char __user * break; default: break; - }; + } } done_str: return i; @@ -874,7 +858,7 @@ static ssize_t get_labels(const char __u n++; if (n >= MAX_MPLS_LABELS) return -E2BIG; - } while(c == ','); + } while (c == ','); pkt_dev->nr_labels = n; return i; @@ -1503,7 +1487,7 @@ static ssize_t pktgen_if_write(struct fi if (len < 0) { return len; } i += len; offset = sprintf(pg_result, "OK: mpls="); - for(n = 0; n < pkt_dev->nr_labels; n++) + for (n = 0; n < pkt_dev->nr_labels; n++) offset += sprintf(pg_result + offset, "%08x%s", ntohl(pkt_dev->labels[n]), n == pkt_dev->nr_labels-1 ? 
"" : ","); @@ -1697,13 +1681,13 @@ static int pktgen_thread_show(struct seq if_lock(t); list_for_each_entry(pkt_dev, &t->if_list, list) if (pkt_dev->running) - seq_printf(seq, "%s ", pkt_dev->ifname); + seq_printf(seq, "%s ", pkt_dev->odev->name); seq_printf(seq, "\nStopped: "); list_for_each_entry(pkt_dev, &t->if_list, list) if (!pkt_dev->running) - seq_printf(seq, "%s ", pkt_dev->ifname); + seq_printf(seq, "%s ", pkt_dev->odev->name); if (t->result[0]) seq_printf(seq, "\nResult: %s\n", t->result); @@ -1849,16 +1833,14 @@ static struct pktgen_dev *__pktgen_NN_th /* * mark a device for removal */ -static int pktgen_mark_device(const char *ifname) +static void pktgen_mark_device(const char *ifname) { struct pktgen_dev *pkt_dev = NULL; const int max_tries = 10, msec_per_try = 125; int i = 0; - int ret = 0; mutex_lock(&pktgen_thread_lock); - PG_DEBUG(printk("pktgen: pktgen_mark_device marking %s for removal\n", - ifname)); + pr_debug("pktgen: pktgen_mark_device marking %s for removal\n", ifname); while (1) { @@ -1867,8 +1849,8 @@ static int pktgen_mark_device(const char break; /* success */ mutex_unlock(&pktgen_thread_lock); - PG_DEBUG(printk("pktgen: pktgen_mark_device waiting for %s " - "to disappear....\n", ifname)); + pr_debug("pktgen: pktgen_mark_device waiting for %s " + "to disappear....\n", ifname); schedule_timeout_interruptible(msecs_to_jiffies(msec_per_try)); mutex_lock(&pktgen_thread_lock); @@ -1876,79 +1858,91 @@ static int pktgen_mark_device(const char printk("pktgen_mark_device: timed out after waiting " "%d msec for device %s to be removed\n", msec_per_try * i, ifname); - ret = 1; break; } } mutex_unlock(&pktgen_thread_lock); +} - return ret; +static void pktgen_change_name(struct net_device *dev) +{ + struct pktgen_thread *t; + + list_for_each_entry(t, &pktgen_threads, th_list) { + struct pktgen_dev *pkt_dev; + + list_for_each_entry(pkt_dev, &t->if_list, list) { + if (pkt_dev->odev != dev) + continue; + + remove_proc_entry(pkt_dev->entry->name, pg_proc_dir); + + pkt_dev->entry = create_proc_entry(dev->name, 0600, + pg_proc_dir); + if (!pkt_dev->entry) + printk(KERN_ERR "pktgen: can't move proc " + " entry for '%s'\n", dev->name); + break; + } + } } static int pktgen_device_event(struct notifier_block *unused, unsigned long event, void *ptr) { - struct net_device *dev = (struct net_device *)(ptr); + struct net_device *dev = ptr; /* It is OK that we do not hold the group lock right now, * as we run under the RTNL lock. */ switch (event) { - case NETDEV_CHANGEADDR: - case NETDEV_GOING_DOWN: - case NETDEV_DOWN: - case NETDEV_UP: - /* Ignore for now */ + case NETDEV_CHANGENAME: + pktgen_change_name(dev); break; case NETDEV_UNREGISTER: pktgen_mark_device(dev->name); break; - }; + } return NOTIFY_DONE; } /* Associate pktgen_dev with a device. 
*/ -static struct net_device *pktgen_setup_dev(struct pktgen_dev *pkt_dev) +static int pktgen_setup_dev(struct pktgen_dev *pkt_dev, const char *ifname) { struct net_device *odev; + int err; /* Clean old setups */ - if (pkt_dev->odev) { dev_put(pkt_dev->odev); pkt_dev->odev = NULL; } - odev = dev_get_by_name(pkt_dev->ifname); - + odev = dev_get_by_name(ifname); if (!odev) { - printk("pktgen: no such netdevice: \"%s\"\n", pkt_dev->ifname); - goto out; + printk("pktgen: no such netdevice: \"%s\"\n", ifname); + return -ENODEV; } + if (odev->type != ARPHRD_ETHER) { - printk("pktgen: not an ethernet device: \"%s\"\n", - pkt_dev->ifname); - goto out_put; - } - if (!netif_running(odev)) { - printk("pktgen: device is down: \"%s\"\n", pkt_dev->ifname); - goto out_put; + printk("pktgen: not an ethernet device: \"%s\"\n", ifname); + err = -EINVAL; + } else if (!netif_running(odev)) { + printk("pktgen: device is down: \"%s\"\n", ifname); + err = -ENETDOWN; + } else { + pkt_dev->odev = odev; + return 0; } - pkt_dev->odev = odev; - - return pkt_dev->odev; -out_put: dev_put(odev); -out: - return NULL; - + return err; } /* Read pkt_dev from the interface and set up internal pktgen_dev @@ -1956,10 +1950,6 @@ out: */ static void pktgen_setup_inject(struct pktgen_dev *pkt_dev) { - /* Try once more, just in case it works now. */ - if (!pkt_dev->odev) - pktgen_setup_dev(pkt_dev); - if (!pkt_dev->odev) { printk("pktgen: ERROR: pkt_dev->odev == NULL in setup_inject.\n"); sprintf(pkt_dev->result, @@ -2096,7 +2086,7 @@ static void mod_cur_headers(struct pktge int flow = 0; if (pkt_dev->cflows) { - flow = pktgen_random() % pkt_dev->cflows; + flow = random32() % pkt_dev->cflows; if (pkt_dev->flows[flow].count > pkt_dev->lflow) pkt_dev->flows[flow].count = 0; @@ -2108,7 +2098,7 @@ static void mod_cur_headers(struct pktge __u32 tmp; if (pkt_dev->flags & F_MACSRC_RND) - mc = pktgen_random() % (pkt_dev->src_mac_count); + mc = random32() % pkt_dev->src_mac_count; else { mc = pkt_dev->cur_src_mac_offset++; if (pkt_dev->cur_src_mac_offset > @@ -2134,7 +2124,7 @@ static void mod_cur_headers(struct pktge __u32 tmp; if (pkt_dev->flags & F_MACDST_RND) - mc = pktgen_random() % (pkt_dev->dst_mac_count); + mc = random32() % pkt_dev->dst_mac_count; else { mc = pkt_dev->cur_dst_mac_offset++; @@ -2158,27 +2148,26 @@ static void mod_cur_headers(struct pktge if (pkt_dev->flags & F_MPLS_RND) { unsigned i; - for(i = 0; i < pkt_dev->nr_labels; i++) + for (i = 0; i < pkt_dev->nr_labels; i++) if (pkt_dev->labels[i] & MPLS_STACK_BOTTOM) pkt_dev->labels[i] = MPLS_STACK_BOTTOM | - ((__force __be32)pktgen_random() & + ((__force __be32)random32() & htonl(0x000fffff)); } if ((pkt_dev->flags & F_VID_RND) && (pkt_dev->vlan_id != 0xffff)) { - pkt_dev->vlan_id = pktgen_random() % 4096; + pkt_dev->vlan_id = random32() & (4096-1); } if ((pkt_dev->flags & F_SVID_RND) && (pkt_dev->svlan_id != 0xffff)) { - pkt_dev->svlan_id = pktgen_random() % 4096; + pkt_dev->svlan_id = random32() & (4096 - 1); } if (pkt_dev->udp_src_min < pkt_dev->udp_src_max) { if (pkt_dev->flags & F_UDPSRC_RND) - pkt_dev->cur_udp_src = - ((pktgen_random() % - (pkt_dev->udp_src_max - pkt_dev->udp_src_min)) + - pkt_dev->udp_src_min); + pkt_dev->cur_udp_src = random32() % + (pkt_dev->udp_src_max - pkt_dev->udp_src_min) + + pkt_dev->udp_src_min; else { pkt_dev->cur_udp_src++; @@ -2189,10 +2178,9 @@ static void mod_cur_headers(struct pktge if (pkt_dev->udp_dst_min < pkt_dev->udp_dst_max) { if (pkt_dev->flags & F_UDPDST_RND) { - pkt_dev->cur_udp_dst = - ((pktgen_random() % - 
(pkt_dev->udp_dst_max - pkt_dev->udp_dst_min)) + - pkt_dev->udp_dst_min); + pkt_dev->cur_udp_dst = random32() % + (pkt_dev->udp_dst_max - pkt_dev->udp_dst_min) + + pkt_dev->udp_dst_min; } else { pkt_dev->cur_udp_dst++; if (pkt_dev->cur_udp_dst >= pkt_dev->udp_dst_max) @@ -2207,7 +2195,7 @@ static void mod_cur_headers(struct pktge saddr_max))) { __u32 t; if (pkt_dev->flags & F_IPSRC_RND) - t = ((pktgen_random() % (imx - imn)) + imn); + t = random32() % (imx - imn) + imn; else { t = ntohl(pkt_dev->cur_saddr); t++; @@ -2228,14 +2216,13 @@ static void mod_cur_headers(struct pktge __be32 s; if (pkt_dev->flags & F_IPDST_RND) { - t = pktgen_random() % (imx - imn) + imn; + t = random32() % (imx - imn) + imn; s = htonl(t); while (LOOPBACK(s) || MULTICAST(s) || BADCLASS(s) || ZERONET(s) || LOCAL_MCAST(s)) { - t = (pktgen_random() % - (imx - imn)) + imn; + t = random32() % (imx - imn) + imn; s = htonl(t); } pkt_dev->cur_daddr = s; @@ -2267,7 +2254,7 @@ static void mod_cur_headers(struct pktge for (i = 0; i < 4; i++) { pkt_dev->cur_in6_daddr.s6_addr32[i] = - (((__force __be32)pktgen_random() | + (((__force __be32)random32() | pkt_dev->min_in6_daddr.s6_addr32[i]) & pkt_dev->max_in6_daddr.s6_addr32[i]); } @@ -2277,9 +2264,9 @@ static void mod_cur_headers(struct pktge if (pkt_dev->min_pkt_size < pkt_dev->max_pkt_size) { __u32 t; if (pkt_dev->flags & F_TXSIZE_RND) { - t = ((pktgen_random() % - (pkt_dev->max_pkt_size - pkt_dev->min_pkt_size)) - + pkt_dev->min_pkt_size); + t = random32() % + (pkt_dev->max_pkt_size - pkt_dev->min_pkt_size) + + pkt_dev->min_pkt_size; } else { t = pkt_dev->cur_pkt_size + 1; if (t > pkt_dev->max_pkt_size) @@ -2294,7 +2281,7 @@ static void mod_cur_headers(struct pktge static void mpls_push(__be32 *mpls, struct pktgen_dev *pkt_dev) { unsigned i; - for(i = 0; i < pkt_dev->nr_labels; i++) { + for (i = 0; i < pkt_dev->nr_labels; i++) { *mpls++ = pkt_dev->labels[i] & ~MPLS_STACK_BOTTOM; } mpls--; @@ -2316,7 +2303,7 @@ static struct sk_buff *fill_packet_ipv4( int datalen, iplen; struct iphdr *iph; struct pktgen_hdr *pgh = NULL; - __be16 protocol = __constant_htons(ETH_P_IP); + __be16 protocol = htons(ETH_P_IP); __be32 *mpls; __be16 *vlan_tci = NULL; /* Encapsulates priority and VLAN ID */ __be16 *vlan_encapsulated_proto = NULL; /* packet type ID field (or len) for VLAN tag */ @@ -2325,10 +2312,10 @@ static struct sk_buff *fill_packet_ipv4( if (pkt_dev->nr_labels) - protocol = __constant_htons(ETH_P_MPLS_UC); + protocol = htons(ETH_P_MPLS_UC); if (pkt_dev->vlan_id != 0xffff) - protocol = __constant_htons(ETH_P_8021Q); + protocol = htons(ETH_P_8021Q); /* Update any of the values, used when we're incrementing various * fields. 
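[Editor's note] The conversions above drop pktgen's private pktgen_random() wrapper in favour of the kernel's random32(). Arbitrary ranges (UDP ports, IP addresses, packet sizes) keep the modulo form, while the 4096-entry VLAN/SVLAN id space switches to a mask because it is a power of two and the AND is cheaper than a division. A tiny sketch of the two forms, with illustrative helpers:

#include <linux/random.h>

static u32 example_pick_in_range(u32 min, u32 max)
{
	/* arbitrary half-open range [min, max), as in the udp_src/udp_dst code */
	return random32() % (max - min) + min;
}

static u16 example_pick_vlan_id(void)
{
	/* power-of-two range [0, 4096): mask instead of modulo */
	return random32() & (4096 - 1);
}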
@@ -2354,24 +2341,28 @@ static struct sk_buff *fill_packet_ipv4( mpls_push(mpls, pkt_dev); if (pkt_dev->vlan_id != 0xffff) { - if(pkt_dev->svlan_id != 0xffff) { + if (pkt_dev->svlan_id != 0xffff) { svlan_tci = (__be16 *)skb_put(skb, sizeof(__be16)); *svlan_tci = build_tci(pkt_dev->svlan_id, pkt_dev->svlan_cfi, pkt_dev->svlan_p); svlan_encapsulated_proto = (__be16 *)skb_put(skb, sizeof(__be16)); - *svlan_encapsulated_proto = __constant_htons(ETH_P_8021Q); + *svlan_encapsulated_proto = htons(ETH_P_8021Q); } vlan_tci = (__be16 *)skb_put(skb, sizeof(__be16)); *vlan_tci = build_tci(pkt_dev->vlan_id, pkt_dev->vlan_cfi, pkt_dev->vlan_p); vlan_encapsulated_proto = (__be16 *)skb_put(skb, sizeof(__be16)); - *vlan_encapsulated_proto = __constant_htons(ETH_P_IP); + *vlan_encapsulated_proto = htons(ETH_P_IP); } - iph = (struct iphdr *)skb_put(skb, sizeof(struct iphdr)); - udph = (struct udphdr *)skb_put(skb, sizeof(struct udphdr)); + skb->network_header = skb->tail; + skb->transport_header = skb->network_header + sizeof(struct iphdr); + skb_put(skb, sizeof(struct iphdr) + sizeof(struct udphdr)); + + iph = ip_hdr(skb); + udph = udp_hdr(skb); memcpy(eth, pkt_dev->hh, 12); *(__be16 *) & eth[12] = protocol; @@ -2400,12 +2391,11 @@ static struct sk_buff *fill_packet_ipv4( iph->check = 0; iph->check = ip_fast_csum((void *)iph, iph->ihl); skb->protocol = protocol; - skb->mac.raw = ((u8 *) iph) - 14 - pkt_dev->nr_labels*sizeof(u32) - - VLAN_TAG_SIZE(pkt_dev) - SVLAN_TAG_SIZE(pkt_dev); + skb->mac_header = (skb->network_header - ETH_HLEN - + pkt_dev->nr_labels * sizeof(u32) - + VLAN_TAG_SIZE(pkt_dev) - SVLAN_TAG_SIZE(pkt_dev)); skb->dev = odev; skb->pkt_type = PACKET_HOST; - skb->nh.iph = iph; - skb->h.uh = udph; if (pkt_dev->nfrags <= 0) pgh = (struct pktgen_hdr *)skb_put(skb, datalen); @@ -2654,7 +2644,7 @@ static struct sk_buff *fill_packet_ipv6( int datalen; struct ipv6hdr *iph; struct pktgen_hdr *pgh = NULL; - __be16 protocol = __constant_htons(ETH_P_IPV6); + __be16 protocol = htons(ETH_P_IPV6); __be32 *mpls; __be16 *vlan_tci = NULL; /* Encapsulates priority and VLAN ID */ __be16 *vlan_encapsulated_proto = NULL; /* packet type ID field (or len) for VLAN tag */ @@ -2662,10 +2652,10 @@ static struct sk_buff *fill_packet_ipv6( __be16 *svlan_encapsulated_proto = NULL; /* packet type ID field (or len) for SVLAN tag */ if (pkt_dev->nr_labels) - protocol = __constant_htons(ETH_P_MPLS_UC); + protocol = htons(ETH_P_MPLS_UC); if (pkt_dev->vlan_id != 0xffff) - protocol = __constant_htons(ETH_P_8021Q); + protocol = htons(ETH_P_8021Q); /* Update any of the values, used when we're incrementing various * fields. 
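[Editor's note] In the fill_packet_ipv4() hunk above, the assignments skb->network_header = skb->tail and skb->transport_header = skb->network_header + sizeof(struct iphdr) work because all three fields are sk_buff_data_t after this series, so the header positions can be seeded directly from wherever the Ethernet/MPLS/VLAN bytes already placed the tail. Written with the accessor helpers instead of direct assignment, the same layout looks roughly like this (illustrative, assuming the skb_set_*_header() helpers added by the same series):

#include <linux/skbuff.h>
#include <linux/ip.h>
#include <linux/udp.h>

static void example_reserve_ip_udp(struct sk_buff *skb)
{
	/* The headers start at the current tail, which sits skb->len bytes
	 * past skb->data once the link-layer bytes have been pushed/put. */
	skb_set_network_header(skb, skb->len);
	skb_set_transport_header(skb, skb->len + sizeof(struct iphdr));
	skb_put(skb, sizeof(struct iphdr) + sizeof(struct udphdr));

	/* ip_hdr(skb) and udp_hdr(skb) now return typed pointers into
	 * the space reserved by skb_put(). */
}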
@@ -2690,24 +2680,28 @@ static struct sk_buff *fill_packet_ipv6( mpls_push(mpls, pkt_dev); if (pkt_dev->vlan_id != 0xffff) { - if(pkt_dev->svlan_id != 0xffff) { + if (pkt_dev->svlan_id != 0xffff) { svlan_tci = (__be16 *)skb_put(skb, sizeof(__be16)); *svlan_tci = build_tci(pkt_dev->svlan_id, pkt_dev->svlan_cfi, pkt_dev->svlan_p); svlan_encapsulated_proto = (__be16 *)skb_put(skb, sizeof(__be16)); - *svlan_encapsulated_proto = __constant_htons(ETH_P_8021Q); + *svlan_encapsulated_proto = htons(ETH_P_8021Q); } vlan_tci = (__be16 *)skb_put(skb, sizeof(__be16)); *vlan_tci = build_tci(pkt_dev->vlan_id, pkt_dev->vlan_cfi, pkt_dev->vlan_p); vlan_encapsulated_proto = (__be16 *)skb_put(skb, sizeof(__be16)); - *vlan_encapsulated_proto = __constant_htons(ETH_P_IPV6); + *vlan_encapsulated_proto = htons(ETH_P_IPV6); } - iph = (struct ipv6hdr *)skb_put(skb, sizeof(struct ipv6hdr)); - udph = (struct udphdr *)skb_put(skb, sizeof(struct udphdr)); + skb->network_header = skb->tail; + skb->transport_header = skb->network_header + sizeof(struct ipv6hdr); + skb_put(skb, sizeof(struct ipv6hdr) + sizeof(struct udphdr)); + + iph = ipv6_hdr(skb); + udph = udp_hdr(skb); memcpy(eth, pkt_dev->hh, 12); *(__be16 *) & eth[12] = protocol; @@ -2729,7 +2723,7 @@ static struct sk_buff *fill_packet_ipv6( udph->len = htons(datalen + sizeof(struct udphdr)); udph->check = 0; /* No checksum */ - *(__be32 *) iph = __constant_htonl(0x60000000); /* Version + flow */ + *(__be32 *) iph = htonl(0x60000000); /* Version + flow */ if (pkt_dev->traffic_class) { /* Version + traffic class + flow (0) */ @@ -2744,13 +2738,12 @@ static struct sk_buff *fill_packet_ipv6( ipv6_addr_copy(&iph->daddr, &pkt_dev->cur_in6_daddr); ipv6_addr_copy(&iph->saddr, &pkt_dev->cur_in6_saddr); - skb->mac.raw = ((u8 *) iph) - 14 - pkt_dev->nr_labels*sizeof(u32) - - VLAN_TAG_SIZE(pkt_dev) - SVLAN_TAG_SIZE(pkt_dev); + skb->mac_header = (skb->network_header - ETH_HLEN - + pkt_dev->nr_labels * sizeof(u32) - + VLAN_TAG_SIZE(pkt_dev) - SVLAN_TAG_SIZE(pkt_dev)); skb->protocol = protocol; skb->dev = odev; skb->pkt_type = PACKET_HOST; - skb->nh.ipv6h = iph; - skb->h.uh = udph; if (pkt_dev->nfrags <= 0) pgh = (struct pktgen_hdr *)skb_put(skb, datalen); @@ -2848,7 +2841,7 @@ static void pktgen_run(struct pktgen_thr struct pktgen_dev *pkt_dev; int started = 0; - PG_DEBUG(printk("pktgen: entering pktgen_run. %p\n", t)); + pr_debug("pktgen: entering pktgen_run. 
%p\n", t); if_lock(t); list_for_each_entry(pkt_dev, &t->if_list, list) { @@ -2880,7 +2873,7 @@ static void pktgen_stop_all_threads_ifs( { struct pktgen_thread *t; - PG_DEBUG(printk("pktgen: entering pktgen_stop_all_threads_ifs.\n")); + pr_debug("pktgen: entering pktgen_stop_all_threads_ifs.\n"); mutex_lock(&pktgen_thread_lock); @@ -2948,7 +2941,7 @@ static void pktgen_run_all_threads(void) { struct pktgen_thread *t; - PG_DEBUG(printk("pktgen: entering pktgen_run_all_threads.\n")); + pr_debug("pktgen: entering pktgen_run_all_threads.\n"); mutex_lock(&pktgen_thread_lock); @@ -3006,7 +2999,7 @@ static int pktgen_stop_device(struct pkt if (!pkt_dev->running) { printk("pktgen: interface: %s is already stopped\n", - pkt_dev->ifname); + pkt_dev->odev->name); return -EINVAL; } @@ -3040,7 +3033,7 @@ static void pktgen_stop(struct pktgen_th { struct pktgen_dev *pkt_dev; - PG_DEBUG(printk("pktgen: entering pktgen_stop\n")); + pr_debug("pktgen: entering pktgen_stop\n"); if_lock(t); @@ -3064,7 +3057,7 @@ static void pktgen_rem_one_if(struct pkt struct list_head *q, *n; struct pktgen_dev *cur; - PG_DEBUG(printk("pktgen: entering pktgen_rem_one_if\n")); + pr_debug("pktgen: entering pktgen_rem_one_if\n"); if_lock(t); @@ -3093,7 +3086,7 @@ static void pktgen_rem_all_ifs(struct pk /* Remove all devices, free mem */ - PG_DEBUG(printk("pktgen: entering pktgen_rem_all_ifs\n")); + pr_debug("pktgen: entering pktgen_rem_all_ifs\n"); if_lock(t); list_for_each_safe(q, n, &t->if_list) { @@ -3276,7 +3269,7 @@ static int pktgen_thread_worker(void *ar t->pid = current->pid; - PG_DEBUG(printk("pktgen: starting pktgen/%d: pid=%d\n", cpu, current->pid)); + pr_debug("pktgen: starting pktgen/%d: pid=%d\n", cpu, current->pid); max_before_softirq = t->max_before_softirq; @@ -3339,13 +3332,13 @@ static int pktgen_thread_worker(void *ar set_current_state(TASK_INTERRUPTIBLE); } - PG_DEBUG(printk("pktgen: %s stopping all device\n", t->tsk->comm)); + pr_debug("pktgen: %s stopping all device\n", t->tsk->comm); pktgen_stop(t); - PG_DEBUG(printk("pktgen: %s removing all device\n", t->tsk->comm)); + pr_debug("pktgen: %s removing all device\n", t->tsk->comm); pktgen_rem_all_ifs(t); - PG_DEBUG(printk("pktgen: %s removing thread.\n", t->tsk->comm)); + pr_debug("pktgen: %s removing thread.\n", t->tsk->comm); pktgen_rem_thread(t); return 0; @@ -3358,13 +3351,13 @@ static struct pktgen_dev *pktgen_find_de if_lock(t); list_for_each_entry(p, &t->if_list, list) - if (strncmp(p->ifname, ifname, IFNAMSIZ) == 0) { + if (strncmp(p->odev->name, ifname, IFNAMSIZ) == 0) { pkt_dev = p; break; } if_unlock(t); - PG_DEBUG(printk("pktgen: find_dev(%s) returning %p\n", ifname, pkt_dev)); + pr_debug("pktgen: find_dev(%s) returning %p\n", ifname, pkt_dev); return pkt_dev; } @@ -3399,7 +3392,7 @@ out: static int pktgen_add_device(struct pktgen_thread *t, const char *ifname) { struct pktgen_dev *pkt_dev; - struct proc_dir_entry *pe; + int err; /* We don't allow a device to be on several threads */ @@ -3441,29 +3434,28 @@ static int pktgen_add_device(struct pktg pkt_dev->svlan_cfi = 0; pkt_dev->svlan_id = 0xffff; - strncpy(pkt_dev->ifname, ifname, IFNAMSIZ); + err = pktgen_setup_dev(pkt_dev, ifname); + if (err) + goto out1; - if (!pktgen_setup_dev(pkt_dev)) { - printk("pktgen: ERROR: pktgen_setup_dev failed.\n"); - if (pkt_dev->flows) - vfree(pkt_dev->flows); - kfree(pkt_dev); - return -ENODEV; - } - - pe = create_proc_entry(ifname, 0600, pg_proc_dir); - if (!pe) { + pkt_dev->entry = create_proc_entry(ifname, 0600, pg_proc_dir); + if (!pkt_dev->entry) { 
printk("pktgen: cannot create %s/%s procfs entry.\n", PG_PROC_DIR, ifname); - if (pkt_dev->flows) - vfree(pkt_dev->flows); - kfree(pkt_dev); - return -EINVAL; + err = -EINVAL; + goto out2; } - pe->proc_fops = &pktgen_if_fops; - pe->data = pkt_dev; + pkt_dev->entry->proc_fops = &pktgen_if_fops; + pkt_dev->entry->data = pkt_dev; return add_dev_to_thread(t, pkt_dev); +out2: + dev_put(pkt_dev->odev); +out1: + if (pkt_dev->flows) + vfree(pkt_dev->flows); + kfree(pkt_dev); + return err; } static int __init pktgen_create_thread(int cpu) @@ -3533,7 +3525,7 @@ static int pktgen_remove_device(struct p struct pktgen_dev *pkt_dev) { - PG_DEBUG(printk("pktgen: remove_device pkt_dev=%p\n", pkt_dev)); + pr_debug("pktgen: remove_device pkt_dev=%p\n", pkt_dev); if (pkt_dev->running) { printk("pktgen:WARNING: trying to remove a running interface, stopping it now.\n"); @@ -3551,9 +3543,8 @@ static int pktgen_remove_device(struct p _rem_dev_from_if_list(t, pkt_dev); - /* Clean up proc file system */ - - remove_proc_entry(pkt_dev->ifname, pg_proc_dir); + if (pkt_dev->entry) + remove_proc_entry(pkt_dev->entry->name, pg_proc_dir); if (pkt_dev->flows) vfree(pkt_dev->flows); diff -puN net/core/rtnetlink.c~git-net net/core/rtnetlink.c --- a/net/core/rtnetlink.c~git-net +++ a/net/core/rtnetlink.c @@ -50,11 +50,13 @@ #include #include #include -#include -#ifdef CONFIG_NET_WIRELESS_RTNETLINK -#include -#include -#endif /* CONFIG_NET_WIRELESS_RTNETLINK */ +#include + +struct rtnl_link +{ + rtnl_doit_func doit; + rtnl_dumpit_func dumpit; +}; static DEFINE_MUTEX(rtnl_mutex); static struct sock *rtnl; @@ -95,7 +97,151 @@ int rtattr_parse(struct rtattr *tb[], in return 0; } -struct rtnetlink_link * rtnetlink_links[NPROTO]; +struct rtnl_link *rtnl_msg_handlers[NPROTO]; + +static inline int rtm_msgindex(int msgtype) +{ + int msgindex = msgtype - RTM_BASE; + + /* + * msgindex < 0 implies someone tried to register a netlink + * control code. msgindex >= RTM_NR_MSGTYPES may indicate that + * the message type has not been added to linux/rtnetlink.h + */ + BUG_ON(msgindex < 0 || msgindex >= RTM_NR_MSGTYPES); + + return msgindex; +} + +static rtnl_doit_func rtnl_get_doit(int protocol, int msgindex) +{ + struct rtnl_link *tab; + + tab = rtnl_msg_handlers[protocol]; + if (tab == NULL || tab[msgindex].doit == NULL) + tab = rtnl_msg_handlers[PF_UNSPEC]; + + return tab ? tab[msgindex].doit : NULL; +} + +static rtnl_dumpit_func rtnl_get_dumpit(int protocol, int msgindex) +{ + struct rtnl_link *tab; + + tab = rtnl_msg_handlers[protocol]; + if (tab == NULL || tab[msgindex].dumpit == NULL) + tab = rtnl_msg_handlers[PF_UNSPEC]; + + return tab ? tab[msgindex].dumpit : NULL; +} + +/** + * __rtnl_register - Register a rtnetlink message type + * @protocol: Protocol family or PF_UNSPEC + * @msgtype: rtnetlink message type + * @doit: Function pointer called for each request message + * @dumpit: Function pointer called for each dump request (NLM_F_DUMP) message + * + * Registers the specified function pointers (at least one of them has + * to be non-NULL) to be called whenever a request message for the + * specified protocol family and message type is received. + * + * The special protocol family PF_UNSPEC may be used to define fallback + * function pointers for the case when no entry for the specific protocol + * family exists. + * + * Returns 0 on success or a negative error code. 
+ */ +int __rtnl_register(int protocol, int msgtype, + rtnl_doit_func doit, rtnl_dumpit_func dumpit) +{ + struct rtnl_link *tab; + int msgindex; + + BUG_ON(protocol < 0 || protocol >= NPROTO); + msgindex = rtm_msgindex(msgtype); + + tab = rtnl_msg_handlers[protocol]; + if (tab == NULL) { + tab = kcalloc(RTM_NR_MSGTYPES, sizeof(*tab), GFP_KERNEL); + if (tab == NULL) + return -ENOBUFS; + + rtnl_msg_handlers[protocol] = tab; + } + + if (doit) + tab[msgindex].doit = doit; + + if (dumpit) + tab[msgindex].dumpit = dumpit; + + return 0; +} + +EXPORT_SYMBOL_GPL(__rtnl_register); + +/** + * rtnl_register - Register a rtnetlink message type + * + * Identical to __rtnl_register() but panics on failure. This is useful + * as failure of this function is very unlikely, it can only happen due + * to lack of memory when allocating the chain to store all message + * handlers for a protocol. Meant for use in init functions where lack + * of memory implies no sense in continueing. + */ +void rtnl_register(int protocol, int msgtype, + rtnl_doit_func doit, rtnl_dumpit_func dumpit) +{ + if (__rtnl_register(protocol, msgtype, doit, dumpit) < 0) + panic("Unable to register rtnetlink message handler, " + "protocol = %d, message type = %d\n", + protocol, msgtype); +} + +EXPORT_SYMBOL_GPL(rtnl_register); + +/** + * rtnl_unregister - Unregister a rtnetlink message type + * @protocol: Protocol family or PF_UNSPEC + * @msgtype: rtnetlink message type + * + * Returns 0 on success or a negative error code. + */ +int rtnl_unregister(int protocol, int msgtype) +{ + int msgindex; + + BUG_ON(protocol < 0 || protocol >= NPROTO); + msgindex = rtm_msgindex(msgtype); + + if (rtnl_msg_handlers[protocol] == NULL) + return -ENOENT; + + rtnl_msg_handlers[protocol][msgindex].doit = NULL; + rtnl_msg_handlers[protocol][msgindex].dumpit = NULL; + + return 0; +} + +EXPORT_SYMBOL_GPL(rtnl_unregister); + +/** + * rtnl_unregister_all - Unregister all rtnetlink message type of a protocol + * @protocol : Protocol family or PF_UNSPEC + * + * Identical to calling rtnl_unregster() for all registered message types + * of a certain protocol family. + */ +void rtnl_unregister_all(int protocol) +{ + BUG_ON(protocol < 0 || protocol >= NPROTO); + + kfree(rtnl_msg_handlers[protocol]); + rtnl_msg_handlers[protocol] = NULL; +} + +EXPORT_SYMBOL_GPL(rtnl_unregister_all); static const int rtm_min[RTM_NR_FAMILIES] = { @@ -249,7 +395,7 @@ static void set_operstate(struct net_dev operstate == IF_OPER_UNKNOWN) operstate = IF_OPER_DORMANT; break; - }; + } if (dev->operstate != operstate) { write_lock_bh(&dev_base_lock); @@ -393,7 +539,6 @@ static int rtnl_dump_ifinfo(struct sk_bu int s_idx = cb->args[0]; struct net_device *dev; - read_lock(&dev_base_lock); for (dev=dev_base, idx=0; dev; dev = dev->next, idx++) { if (idx < s_idx) continue; @@ -402,7 +547,6 @@ static int rtnl_dump_ifinfo(struct sk_bu cb->nlh->nlmsg_seq, 0, NLM_F_MULTI) <= 0) break; } - read_unlock(&dev_base_lock); cb->args[0] = idx; return skb->len; @@ -536,17 +680,6 @@ static int rtnl_setlink(struct sk_buff * modified = 1; } -#ifdef CONFIG_NET_WIRELESS_RTNETLINK - if (tb[IFLA_WIRELESS]) { - /* Call Wireless Extensions. - * Various stuff checked in there... 
*/ - err = wireless_rtnetlink_set(dev, nla_data(tb[IFLA_WIRELESS]), - nla_len(tb[IFLA_WIRELESS])); - if (err < 0) - goto errout_dev; - } -#endif /* CONFIG_NET_WIRELESS_RTNETLINK */ - if (tb[IFLA_BROADCAST]) { nla_memcpy(dev->broadcast, tb[IFLA_BROADCAST], dev->addr_len); send_addr_notify = 1; @@ -610,22 +743,6 @@ static int rtnl_getlink(struct sk_buff * } else return -EINVAL; - -#ifdef CONFIG_NET_WIRELESS_RTNETLINK - if (tb[IFLA_WIRELESS]) { - /* Call Wireless Extensions. We need to know the size before - * we can alloc. Various stuff checked in there... */ - err = wireless_rtnetlink_get(dev, nla_data(tb[IFLA_WIRELESS]), - nla_len(tb[IFLA_WIRELESS]), - &iw_buf, &iw_buf_len); - if (err < 0) - goto errout; - - /* Payload is at an offset in buffer */ - iw = iw_buf + IW_EV_POINT_OFF; - } -#endif /* CONFIG_NET_WIRELESS_RTNETLINK */ - nskb = nlmsg_new(if_nlmsg_size(iw_buf_len), GFP_KERNEL); if (nskb == NULL) { err = -ENOBUFS; @@ -648,7 +765,7 @@ errout: return err; } -static int rtnl_dump_all(struct sk_buff *skb, struct netlink_callback *cb) +int rtnl_dump_all(struct sk_buff *skb, struct netlink_callback *cb) { int idx; int s_idx = cb->family; @@ -659,12 +776,12 @@ static int rtnl_dump_all(struct sk_buff int type = cb->nlh->nlmsg_type-RTM_BASE; if (idx < s_idx || idx == PF_PACKET) continue; - if (rtnetlink_links[idx] == NULL || - rtnetlink_links[idx][type].dumpit == NULL) + if (rtnl_msg_handlers[idx] == NULL || + rtnl_msg_handlers[idx][type].dumpit == NULL) continue; if (idx > s_idx) memset(&cb->args[0], 0, sizeof(cb->args)); - if (rtnetlink_links[idx][type].dumpit(skb, cb)) + if (rtnl_msg_handlers[idx][type].dumpit(skb, cb)) break; } cb->family = idx; @@ -672,6 +789,8 @@ static int rtnl_dump_all(struct sk_buff return skb->len; } +EXPORT_SYMBOL_GPL(rtnl_dump_all); + void rtmsg_ifinfo(int type, struct net_device *dev, unsigned change) { struct sk_buff *skb; @@ -700,30 +819,18 @@ static int rtattr_max; /* Process one rtnetlink message. 
*/ -static __inline__ int -rtnetlink_rcv_msg(struct sk_buff *skb, struct nlmsghdr *nlh, int *errp) +static int rtnetlink_rcv_msg(struct sk_buff *skb, struct nlmsghdr *nlh) { - struct rtnetlink_link *link; - struct rtnetlink_link *link_tab; + rtnl_doit_func doit; int sz_idx, kind; int min_len; int family; int type; int err; - /* Only requests are handled by kernel now */ - if (!(nlh->nlmsg_flags&NLM_F_REQUEST)) - return 0; - type = nlh->nlmsg_type; - - /* A control message: ignore them */ - if (type < RTM_BASE) - return 0; - - /* Unknown message: reply with EINVAL */ if (type > RTM_MAX) - goto err_inval; + return -EOPNOTSUPP; type -= RTM_BASE; @@ -732,45 +839,33 @@ rtnetlink_rcv_msg(struct sk_buff *skb, s return 0; family = ((struct rtgenmsg*)NLMSG_DATA(nlh))->rtgen_family; - if (family >= NPROTO) { - *errp = -EAFNOSUPPORT; - return -1; - } - - link_tab = rtnetlink_links[family]; - if (link_tab == NULL) - link_tab = rtnetlink_links[PF_UNSPEC]; - link = &link_tab[type]; + if (family >= NPROTO) + return -EAFNOSUPPORT; sz_idx = type>>2; kind = type&3; - if (kind != 2 && security_netlink_recv(skb, CAP_NET_ADMIN)) { - *errp = -EPERM; - return -1; - } + if (kind != 2 && security_netlink_recv(skb, CAP_NET_ADMIN)) + return -EPERM; if (kind == 2 && nlh->nlmsg_flags&NLM_F_DUMP) { - if (link->dumpit == NULL) - link = &(rtnetlink_links[PF_UNSPEC][type]); - - if (link->dumpit == NULL) - goto err_inval; - - if ((*errp = netlink_dump_start(rtnl, skb, nlh, - link->dumpit, NULL)) != 0) { - return -1; - } + rtnl_dumpit_func dumpit; - netlink_queue_skip(nlh, skb); - return -1; + dumpit = rtnl_get_dumpit(family, type); + if (dumpit == NULL) + return -EOPNOTSUPP; + + __rtnl_unlock(); + err = netlink_dump_start(rtnl, skb, nlh, dumpit, NULL); + rtnl_lock(); + return err; } memset(rta_buf, 0, (rtattr_max * sizeof(struct rtattr *))); min_len = rtm_min[sz_idx]; if (nlh->nlmsg_len < min_len) - goto err_inval; + return -EINVAL; if (nlh->nlmsg_len > min_len) { int attrlen = nlh->nlmsg_len - NLMSG_ALIGN(min_len); @@ -780,25 +875,18 @@ rtnetlink_rcv_msg(struct sk_buff *skb, s unsigned flavor = attr->rta_type; if (flavor) { if (flavor > rta_max[sz_idx]) - goto err_inval; + return -EINVAL; rta_buf[flavor-1] = attr; } attr = RTA_NEXT(attr, attrlen); } } - if (link->doit == NULL) - link = &(rtnetlink_links[PF_UNSPEC][type]); - if (link->doit == NULL) - goto err_inval; - err = link->doit(skb, nlh, (void *)&rta_buf[0]); + doit = rtnl_get_doit(family, type); + if (doit == NULL) + return -EOPNOTSUPP; - *errp = err; - return err; - -err_inval: - *errp = -EINVAL; - return -1; + return doit(skb, nlh, (void *)&rta_buf[0]); } static void rtnetlink_rcv(struct sock *sk, int len) @@ -814,25 +902,6 @@ static void rtnetlink_rcv(struct sock *s } while (qlen); } -static struct rtnetlink_link link_rtnetlink_table[RTM_NR_MSGTYPES] = -{ - [RTM_GETLINK - RTM_BASE] = { .doit = rtnl_getlink, - .dumpit = rtnl_dump_ifinfo }, - [RTM_SETLINK - RTM_BASE] = { .doit = rtnl_setlink }, - [RTM_GETADDR - RTM_BASE] = { .dumpit = rtnl_dump_all }, - [RTM_GETROUTE - RTM_BASE] = { .dumpit = rtnl_dump_all }, - [RTM_NEWNEIGH - RTM_BASE] = { .doit = neigh_add }, - [RTM_DELNEIGH - RTM_BASE] = { .doit = neigh_delete }, - [RTM_GETNEIGH - RTM_BASE] = { .dumpit = neigh_dump_info }, -#ifdef CONFIG_FIB_RULES - [RTM_NEWRULE - RTM_BASE] = { .doit = fib_nl_newrule }, - [RTM_DELRULE - RTM_BASE] = { .doit = fib_nl_delrule }, -#endif - [RTM_GETRULE - RTM_BASE] = { .dumpit = rtnl_dump_all }, - [RTM_GETNEIGHTBL - RTM_BASE] = { .dumpit = neightbl_dump_info }, - [RTM_SETNEIGHTBL - 
RTM_BASE] = { .doit = neightbl_set }, -}; - static int rtnetlink_event(struct notifier_block *this, unsigned long event, void *ptr) { struct net_device *dev = ptr; @@ -874,19 +943,22 @@ void __init rtnetlink_init(void) panic("rtnetlink_init: cannot allocate rta_buf\n"); rtnl = netlink_kernel_create(NETLINK_ROUTE, RTNLGRP_MAX, rtnetlink_rcv, - THIS_MODULE); + &rtnl_mutex, THIS_MODULE); if (rtnl == NULL) panic("rtnetlink_init: cannot initialize rtnetlink\n"); netlink_set_nonroot(NETLINK_ROUTE, NL_NONROOT_RECV); register_netdevice_notifier(&rtnetlink_dev_notifier); - rtnetlink_links[PF_UNSPEC] = link_rtnetlink_table; - rtnetlink_links[PF_PACKET] = link_rtnetlink_table; + + rtnl_register(PF_UNSPEC, RTM_GETLINK, rtnl_getlink, rtnl_dump_ifinfo); + rtnl_register(PF_UNSPEC, RTM_SETLINK, rtnl_setlink, NULL); + + rtnl_register(PF_UNSPEC, RTM_GETADDR, NULL, rtnl_dump_all); + rtnl_register(PF_UNSPEC, RTM_GETROUTE, NULL, rtnl_dump_all); } EXPORT_SYMBOL(__rta_fill); EXPORT_SYMBOL(rtattr_strlcpy); EXPORT_SYMBOL(rtattr_parse); -EXPORT_SYMBOL(rtnetlink_links); EXPORT_SYMBOL(rtnetlink_put_metrics); EXPORT_SYMBOL(rtnl_lock); EXPORT_SYMBOL(rtnl_trylock); diff -puN net/core/skbuff.c~git-net net/core/skbuff.c --- a/net/core/skbuff.c~git-net +++ a/net/core/skbuff.c @@ -55,6 +55,7 @@ #include #include #include +#include #include #include @@ -87,8 +88,9 @@ static struct kmem_cache *skbuff_fclone_ void skb_over_panic(struct sk_buff *skb, int sz, void *here) { printk(KERN_EMERG "skb_over_panic: text:%p len:%d put:%d head:%p " - "data:%p tail:%p end:%p dev:%s\n", - here, skb->len, sz, skb->head, skb->data, skb->tail, skb->end, + "data:%p tail:%#lx end:%#lx dev:%s\n", + here, skb->len, sz, skb->head, skb->data, + (unsigned long)skb->tail, (unsigned long)skb->end, skb->dev ? skb->dev->name : ""); BUG(); } @@ -105,8 +107,9 @@ void skb_over_panic(struct sk_buff *skb, void skb_under_panic(struct sk_buff *skb, int sz, void *here) { printk(KERN_EMERG "skb_under_panic: text:%p len:%d put:%d head:%p " - "data:%p tail:%p end:%p dev:%s\n", - here, skb->len, sz, skb->head, skb->data, skb->tail, skb->end, + "data:%p tail:%#lx end:%#lx dev:%s\n", + here, skb->len, sz, skb->head, skb->data, + (unsigned long)skb->tail, (unsigned long)skb->end, skb->dev ? skb->dev->name : ""); BUG(); } @@ -155,20 +158,22 @@ struct sk_buff *__alloc_skb(unsigned int if (!skb) goto out; - /* Get the DATA. Size must match skb_add_mtu(). 
*/ size = SKB_DATA_ALIGN(size); data = kmalloc_node_track_caller(size + sizeof(struct skb_shared_info), gfp_mask, node); if (!data) goto nodata; - memset(skb, 0, offsetof(struct sk_buff, truesize)); + /* + * See comment in sk_buff definition, just before the 'tail' member + */ + memset(skb, 0, offsetof(struct sk_buff, tail)); skb->truesize = size + sizeof(struct sk_buff); atomic_set(&skb->users, 1); skb->head = data; skb->data = data; - skb->tail = data; - skb->end = data + size; + skb_reset_tail_pointer(skb); + skb->end = skb->tail + size; /* make sure we initialize shinfo sequentially */ shinfo = skb_shinfo(skb); atomic_set(&shinfo->dataref, 1); @@ -299,7 +304,7 @@ void kfree_skbmem(struct sk_buff *skb) if (atomic_dec_and_test(fclone_ref)) kmem_cache_free(skbuff_fclone_cache, other); break; - }; + } } /** @@ -321,15 +326,13 @@ void __kfree_skb(struct sk_buff *skb) WARN_ON(in_irq()); skb->destructor(skb); } -#ifdef CONFIG_NETFILTER - nf_conntrack_put(skb->nfct); #if defined(CONFIG_NF_CONNTRACK) || defined(CONFIG_NF_CONNTRACK_MODULE) + nf_conntrack_put(skb->nfct); nf_conntrack_put_reasm(skb->nfct_reasm); #endif #ifdef CONFIG_BRIDGE_NETFILTER nf_bridge_put(skb->nf_bridge); #endif -#endif /* XXX: IS this still necessary? - JHS */ #ifdef CONFIG_NET_SCHED skb->tc_index = 0; @@ -396,9 +399,9 @@ struct sk_buff *skb_clone(struct sk_buff n->sk = NULL; C(tstamp); C(dev); - C(h); - C(nh); - C(mac); + C(transport_header); + C(network_header); + C(mac_header); C(dst); dst_clone(skb->dst); C(sp); @@ -422,19 +425,7 @@ struct sk_buff *skb_clone(struct sk_buff C(protocol); n->destructor = NULL; C(mark); -#ifdef CONFIG_NETFILTER - C(nfct); - nf_conntrack_get(skb->nfct); - C(nfctinfo); -#if defined(CONFIG_NF_CONNTRACK) || defined(CONFIG_NF_CONNTRACK_MODULE) - C(nfct_reasm); - nf_conntrack_get_reasm(skb->nfct_reasm); -#endif -#ifdef CONFIG_BRIDGE_NETFILTER - C(nf_bridge); - nf_bridge_get(skb->nf_bridge); -#endif -#endif /*CONFIG_NETFILTER*/ + __nf_copy(n, skb); #ifdef CONFIG_NET_SCHED C(tc_index); #ifdef CONFIG_NET_CLS_ACT @@ -460,11 +451,12 @@ struct sk_buff *skb_clone(struct sk_buff static void copy_skb_header(struct sk_buff *new, const struct sk_buff *old) { +#ifndef NET_SKBUFF_DATA_USES_OFFSET /* * Shift between the two data areas in bytes */ unsigned long offset = new->data - old->data; - +#endif new->sk = NULL; new->dev = old->dev; new->priority = old->priority; @@ -473,9 +465,15 @@ static void copy_skb_header(struct sk_bu #ifdef CONFIG_INET new->sp = secpath_get(old->sp); #endif - new->h.raw = old->h.raw + offset; - new->nh.raw = old->nh.raw + offset; - new->mac.raw = old->mac.raw + offset; + new->transport_header = old->transport_header; + new->network_header = old->network_header; + new->mac_header = old->mac_header; +#ifndef NET_SKBUFF_DATA_USES_OFFSET + /* {transport,network,mac}_header are relative to skb->head */ + new->transport_header += offset; + new->network_header += offset; + new->mac_header += offset; +#endif memcpy(new->cb, old->cb, sizeof(old->cb)); new->local_df = old->local_df; new->fclone = SKB_FCLONE_UNAVAILABLE; @@ -483,22 +481,10 @@ static void copy_skb_header(struct sk_bu new->tstamp = old->tstamp; new->destructor = NULL; new->mark = old->mark; -#ifdef CONFIG_NETFILTER - new->nfct = old->nfct; - nf_conntrack_get(old->nfct); - new->nfctinfo = old->nfctinfo; -#if defined(CONFIG_NF_CONNTRACK) || defined(CONFIG_NF_CONNTRACK_MODULE) - new->nfct_reasm = old->nfct_reasm; - nf_conntrack_get_reasm(old->nfct_reasm); -#endif + __nf_copy(new, old); #if defined(CONFIG_IP_VS) || 
defined(CONFIG_IP_VS_MODULE) new->ipvs_property = old->ipvs_property; #endif -#ifdef CONFIG_BRIDGE_NETFILTER - new->nf_bridge = old->nf_bridge; - nf_bridge_get(old->nf_bridge); -#endif -#endif #ifdef CONFIG_NET_SCHED #ifdef CONFIG_NET_CLS_ACT new->tc_verd = old->tc_verd; @@ -535,8 +521,12 @@ struct sk_buff *skb_copy(const struct sk /* * Allocate the copy buffer */ - struct sk_buff *n = alloc_skb(skb->end - skb->head + skb->data_len, - gfp_mask); + struct sk_buff *n; +#ifdef NET_SKBUFF_DATA_USES_OFFSET + n = alloc_skb(skb->end + skb->data_len, gfp_mask); +#else + n = alloc_skb(skb->end - skb->head + skb->data_len, gfp_mask); +#endif if (!n) return NULL; @@ -573,8 +563,12 @@ struct sk_buff *pskb_copy(struct sk_buff /* * Allocate the copy buffer */ - struct sk_buff *n = alloc_skb(skb->end - skb->head, gfp_mask); - + struct sk_buff *n; +#ifdef NET_SKBUFF_DATA_USES_OFFSET + n = alloc_skb(skb->end, gfp_mask); +#else + n = alloc_skb(skb->end - skb->head, gfp_mask); +#endif if (!n) goto out; @@ -583,7 +577,7 @@ struct sk_buff *pskb_copy(struct sk_buff /* Set the tail pointer and length */ skb_put(n, skb_headlen(skb)); /* Copy the bytes */ - memcpy(n->data, skb->data, n->len); + skb_copy_from_linear_data(skb, n->data, n->len); n->csum = skb->csum; n->ip_summed = skb->ip_summed; @@ -632,7 +626,11 @@ int pskb_expand_head(struct sk_buff *skb { int i; u8 *data; +#ifdef NET_SKBUFF_DATA_USES_OFFSET + int size = nhead + skb->end + ntail; +#else int size = nhead + (skb->end - skb->head) + ntail; +#endif long off; if (skb_shared(skb)) @@ -646,8 +644,14 @@ int pskb_expand_head(struct sk_buff *skb /* Copy only real data... and, alas, header. This should be * optimized for the cases when header is void. */ - memcpy(data + nhead, skb->head, skb->tail - skb->head); - memcpy(data + size, skb->end, sizeof(struct skb_shared_info)); + memcpy(data + nhead, skb->head, +#ifdef NET_SKBUFF_DATA_USES_OFFSET + skb->tail); +#else + skb->tail - skb->head); +#endif + memcpy(data + size, skb_end_pointer(skb), + sizeof(struct skb_shared_info)); for (i = 0; i < skb_shinfo(skb)->nr_frags; i++) get_page(skb_shinfo(skb)->frags[i].page); @@ -660,12 +664,18 @@ int pskb_expand_head(struct sk_buff *skb off = (data + nhead) - skb->head; skb->head = data; - skb->end = data + size; skb->data += off; - skb->tail += off; - skb->mac.raw += off; - skb->h.raw += off; - skb->nh.raw += off; +#ifdef NET_SKBUFF_DATA_USES_OFFSET + skb->end = size; + off = nhead; +#else + skb->end = skb->head + size; +#endif + /* {transport,network,mac}_header and tail are relative to skb->head */ + skb->tail += off; + skb->transport_header += off; + skb->network_header += off; + skb->mac_header += off; skb->cloned = 0; skb->nohdr = 0; atomic_set(&skb_shinfo(skb)->dataref, 1); @@ -726,7 +736,9 @@ struct sk_buff *skb_copy_expand(const st */ struct sk_buff *n = alloc_skb(newheadroom + skb->len + newtailroom, gfp_mask); + int oldheadroom = skb_headroom(skb); int head_copy_len, head_copy_off; + int off = 0; if (!n) return NULL; @@ -736,7 +748,7 @@ struct sk_buff *skb_copy_expand(const st /* Set the tail pointer and length */ skb_put(n, skb->len); - head_copy_len = skb_headroom(skb); + head_copy_len = oldheadroom; head_copy_off = 0; if (newheadroom <= head_copy_len) head_copy_len = newheadroom; @@ -750,6 +762,13 @@ struct sk_buff *skb_copy_expand(const st copy_skb_header(n, skb); +#ifdef NET_SKBUFF_DATA_USES_OFFSET + off = newheadroom - oldheadroom; +#endif + n->transport_header += off; + n->network_header += off; + n->mac_header += off; + return n; } @@ -877,7 +896,7 @@ 
done: } else { skb->len = len; skb->data_len = 0; - skb->tail = skb->data + len; + skb_set_tail_pointer(skb, len); } return 0; @@ -922,7 +941,7 @@ unsigned char *__pskb_pull_tail(struct s return NULL; } - if (skb_copy_bits(skb, skb_headlen(skb), skb->tail, delta)) + if (skb_copy_bits(skb, skb_headlen(skb), skb_tail_pointer(skb), delta)) BUG(); /* Optimization: no fragments, no reasons to preestimate @@ -1018,7 +1037,7 @@ pull_pages: skb->tail += delta; skb->data_len -= delta; - return skb->tail; + return skb_tail_pointer(skb); } /* Copy some data bits from skb to kernel buffer. */ @@ -1035,7 +1054,7 @@ int skb_copy_bits(const struct sk_buff * if ((copy = start - offset) > 0) { if (copy > len) copy = len; - memcpy(to, skb->data + offset, copy); + skb_copy_from_linear_data_offset(skb, offset, to, copy); if ((len -= copy) == 0) return 0; offset += copy; @@ -1110,7 +1129,7 @@ fault: * traversing fragment lists and such. */ -int skb_store_bits(const struct sk_buff *skb, int offset, void *from, int len) +int skb_store_bits(struct sk_buff *skb, int offset, const void *from, int len) { int i, copy; int start = skb_headlen(skb); @@ -1121,7 +1140,7 @@ int skb_store_bits(const struct sk_buff if ((copy = start - offset) > 0) { if (copy > len) copy = len; - memcpy(skb->data + offset, from, copy); + skb_copy_to_linear_data_offset(skb, offset, from, copy); if ((len -= copy) == 0) return 0; offset += copy; @@ -1348,13 +1367,13 @@ void skb_copy_and_csum_dev(const struct long csstart; if (skb->ip_summed == CHECKSUM_PARTIAL) - csstart = skb->h.raw - skb->data; + csstart = skb->csum_start - skb_headroom(skb); else csstart = skb_headlen(skb); BUG_ON(csstart > skb_headlen(skb)); - memcpy(to, skb->data, csstart); + skb_copy_from_linear_data(skb, to, csstart); csum = 0; if (csstart != skb->len) @@ -1522,27 +1541,14 @@ void skb_insert(struct sk_buff *old, str spin_unlock_irqrestore(&list->lock, flags); } -#if 0 -/* - * Tune the memory allocator for a new MTU size. - */ -void skb_add_mtu(int mtu) -{ - /* Must match allocation in alloc_skb */ - mtu = SKB_DATA_ALIGN(mtu) + sizeof(struct skb_shared_info); - - kmem_add_cache_size(mtu); -} -#endif - static inline void skb_split_inside_header(struct sk_buff *skb, struct sk_buff* skb1, const u32 len, const int pos) { int i; - memcpy(skb_put(skb1, pos - len), skb->data + len, pos - len); - + skb_copy_from_linear_data_offset(skb, len, skb_put(skb1, pos - len), + pos - len); /* And move data appendix as is. 
*/ for (i = 0; i < skb_shinfo(skb)->nr_frags; i++) skb_shinfo(skb1)->frags[i] = skb_shinfo(skb)->frags[i]; @@ -1553,7 +1559,7 @@ static inline void skb_split_inside_head skb1->len += skb1->data_len; skb->data_len = 0; skb->len = len; - skb->tail = skb->data + len; + skb_set_tail_pointer(skb, len); } static inline void skb_split_no_header(struct sk_buff *skb, @@ -1878,7 +1884,7 @@ struct sk_buff *skb_segment(struct sk_bu struct sk_buff *segs = NULL; struct sk_buff *tail = NULL; unsigned int mss = skb_shinfo(skb)->gso_size; - unsigned int doffset = skb->data - skb->mac.raw; + unsigned int doffset = skb->data - skb_mac_header(skb); unsigned int offset = doffset; unsigned int headroom; unsigned int len; @@ -1928,11 +1934,12 @@ struct sk_buff *skb_segment(struct sk_bu nskb->mac_len = skb->mac_len; skb_reserve(nskb, headroom); - nskb->mac.raw = nskb->data; - nskb->nh.raw = nskb->data + skb->mac_len; - nskb->h.raw = nskb->nh.raw + (skb->h.raw - skb->nh.raw); - memcpy(skb_put(nskb, doffset), skb->data, doffset); - + skb_reset_mac_header(nskb); + skb_set_network_header(nskb, skb->mac_len); + nskb->transport_header = (nskb->network_header + + skb_network_header_len(skb)); + skb_copy_from_linear_data(skb, skb_put(nskb, doffset), + doffset); if (!sg) { nskb->csum = skb_copy_and_csum_bits(skb, offset, skb_put(nskb, len), @@ -1945,7 +1952,8 @@ struct sk_buff *skb_segment(struct sk_bu nskb->ip_summed = CHECKSUM_PARTIAL; nskb->csum = skb->csum; - memcpy(skb_put(nskb, hsize), skb->data + offset, hsize); + skb_copy_from_linear_data_offset(skb, offset, + skb_put(nskb, hsize), hsize); while (pos < offset + len) { BUG_ON(i >= nfrags); @@ -2005,6 +2013,190 @@ void __init skb_init(void) NULL, NULL); } +/** + * skb_to_sgvec - Fill a scatter-gather list from a socket buffer + * @skb: Socket buffer containing the buffers to be mapped + * @sg: The scatter-gather list to map into + * @offset: The offset into the buffer's contents to start mapping + * @len: Length of buffer space to be mapped + * + * Fill the specified scatter-gather list with mappings/pointers into a + * region of the buffer space attached to a socket buffer. 
+ */ +int +skb_to_sgvec(struct sk_buff *skb, struct scatterlist *sg, int offset, int len) +{ + int start = skb_headlen(skb); + int i, copy = start - offset; + int elt = 0; + + if (copy > 0) { + if (copy > len) + copy = len; + sg[elt].page = virt_to_page(skb->data + offset); + sg[elt].offset = (unsigned long)(skb->data + offset) % PAGE_SIZE; + sg[elt].length = copy; + elt++; + if ((len -= copy) == 0) + return elt; + offset += copy; + } + + for (i = 0; i < skb_shinfo(skb)->nr_frags; i++) { + int end; + + BUG_TRAP(start <= offset + len); + + end = start + skb_shinfo(skb)->frags[i].size; + if ((copy = end - offset) > 0) { + skb_frag_t *frag = &skb_shinfo(skb)->frags[i]; + + if (copy > len) + copy = len; + sg[elt].page = frag->page; + sg[elt].offset = frag->page_offset+offset-start; + sg[elt].length = copy; + elt++; + if (!(len -= copy)) + return elt; + offset += copy; + } + start = end; + } + + if (skb_shinfo(skb)->frag_list) { + struct sk_buff *list = skb_shinfo(skb)->frag_list; + + for (; list; list = list->next) { + int end; + + BUG_TRAP(start <= offset + len); + + end = start + list->len; + if ((copy = end - offset) > 0) { + if (copy > len) + copy = len; + elt += skb_to_sgvec(list, sg+elt, offset - start, copy); + if ((len -= copy) == 0) + return elt; + offset += copy; + } + start = end; + } + } + BUG_ON(len); + return elt; +} + +/** + * skb_cow_data - Check that a socket buffer's data buffers are writable + * @skb: The socket buffer to check. + * @tailbits: Amount of trailing space to be added + * @trailer: Returned pointer to the skb where the @tailbits space begins + * + * Make sure that the data buffers attached to a socket buffer are + * writable. If they are not, private copies are made of the data buffers + * and the socket buffer is set to use these instead. + * + * If @tailbits is given, make sure that there is space to write @tailbits + * bytes of data beyond current end of socket buffer. @trailer will be + * set to point to the skb in which this space begins. + * + * The number of scatterlist elements required to completely map the + * COW'd and extended socket buffer will be returned. + */ +int skb_cow_data(struct sk_buff *skb, int tailbits, struct sk_buff **trailer) +{ + int copyflag; + int elt; + struct sk_buff *skb1, **skb_p; + + /* If skb is cloned or its head is paged, reallocate + * head pulling out all the pages (pages are considered not writable + * at the moment even if they are anonymous). + */ + if ((skb_cloned(skb) || skb_shinfo(skb)->nr_frags) && + __pskb_pull_tail(skb, skb_pagelen(skb)-skb_headlen(skb)) == NULL) + return -ENOMEM; + + /* Easy case. Most of packets will go this way. */ + if (!skb_shinfo(skb)->frag_list) { + /* A little of trouble, not enough of space for trailer. + * This should not happen, when stack is tuned to generate + * good frames. OK, on miss we reallocate and reserve even more + * space, 128 bytes is fair. */ + + if (skb_tailroom(skb) < tailbits && + pskb_expand_head(skb, 0, tailbits-skb_tailroom(skb)+128, GFP_ATOMIC)) + return -ENOMEM; + + /* Voila! */ + *trailer = skb; + return 1; + } + + /* Misery. We are in troubles, going to mincer fragments... */ + + elt = 1; + skb_p = &skb_shinfo(skb)->frag_list; + copyflag = 0; + + while ((skb1 = *skb_p) != NULL) { + int ntail = 0; + + /* The fragment is partially pulled by someone, + * this can happen on input. Copy it and everything + * after it. */ + + if (skb_shared(skb1)) + copyflag = 1; + + /* If the skb is the last, worry about trailer. 
*/ + + if (skb1->next == NULL && tailbits) { + if (skb_shinfo(skb1)->nr_frags || + skb_shinfo(skb1)->frag_list || + skb_tailroom(skb1) < tailbits) + ntail = tailbits + 128; + } + + if (copyflag || + skb_cloned(skb1) || + ntail || + skb_shinfo(skb1)->nr_frags || + skb_shinfo(skb1)->frag_list) { + struct sk_buff *skb2; + + /* Fuck, we are miserable poor guys... */ + if (ntail == 0) + skb2 = skb_copy(skb1, GFP_ATOMIC); + else + skb2 = skb_copy_expand(skb1, + skb_headroom(skb1), + ntail, + GFP_ATOMIC); + if (unlikely(skb2 == NULL)) + return -ENOMEM; + + if (skb1->sk) + skb_set_owner_w(skb2, skb1->sk); + + /* Looking around. Are we still alive? + * OK, link new skb, drop old one */ + + skb2->next = skb1->next; + *skb_p = skb2; + kfree_skb(skb1); + skb1 = skb2; + } + elt++; + *trailer = skb1; + skb_p = &skb1->next; + } + + return elt; +} + EXPORT_SYMBOL(___pskb_trim); EXPORT_SYMBOL(__kfree_skb); EXPORT_SYMBOL(kfree_skb); @@ -2039,3 +2231,6 @@ EXPORT_SYMBOL(skb_seq_read); EXPORT_SYMBOL(skb_abort_seq_read); EXPORT_SYMBOL(skb_find_text); EXPORT_SYMBOL(skb_append_datato_frags); + +EXPORT_SYMBOL_GPL(skb_to_sgvec); +EXPORT_SYMBOL_GPL(skb_cow_data); diff -puN net/core/sock.c~git-net net/core/sock.c --- a/net/core/sock.c~git-net +++ a/net/core/sock.c @@ -361,8 +361,8 @@ int sock_setsockopt(struct socket *sock, } #endif - if(optlensk_reuse = valbool; - break; - case SO_TYPE: - case SO_ERROR: - ret = -ENOPROTOOPT; - break; - case SO_DONTROUTE: - if (valbool) - sock_set_flag(sk, SOCK_LOCALROUTE); - else - sock_reset_flag(sk, SOCK_LOCALROUTE); - break; - case SO_BROADCAST: - sock_valbool_flag(sk, SOCK_BROADCAST, valbool); - break; - case SO_SNDBUF: - /* Don't error on this BSD doesn't and if you think - about it this is right. Otherwise apps have to - play 'guess the biggest size' games. RCVBUF/SNDBUF - are treated in BSD as hints */ + switch(optname) { + case SO_DEBUG: + if (val && !capable(CAP_NET_ADMIN)) { + ret = -EACCES; + } + else if (valbool) + sock_set_flag(sk, SOCK_DBG); + else + sock_reset_flag(sk, SOCK_DBG); + break; + case SO_REUSEADDR: + sk->sk_reuse = valbool; + break; + case SO_TYPE: + case SO_ERROR: + ret = -ENOPROTOOPT; + break; + case SO_DONTROUTE: + if (valbool) + sock_set_flag(sk, SOCK_LOCALROUTE); + else + sock_reset_flag(sk, SOCK_LOCALROUTE); + break; + case SO_BROADCAST: + sock_valbool_flag(sk, SOCK_BROADCAST, valbool); + break; + case SO_SNDBUF: + /* Don't error on this BSD doesn't and if you think + about it this is right. Otherwise apps have to + play 'guess the biggest size' games. RCVBUF/SNDBUF + are treated in BSD as hints */ - if (val > sysctl_wmem_max) - val = sysctl_wmem_max; + if (val > sysctl_wmem_max) + val = sysctl_wmem_max; set_sndbuf: - sk->sk_userlocks |= SOCK_SNDBUF_LOCK; - if ((val * 2) < SOCK_MIN_SNDBUF) - sk->sk_sndbuf = SOCK_MIN_SNDBUF; - else - sk->sk_sndbuf = val * 2; + sk->sk_userlocks |= SOCK_SNDBUF_LOCK; + if ((val * 2) < SOCK_MIN_SNDBUF) + sk->sk_sndbuf = SOCK_MIN_SNDBUF; + else + sk->sk_sndbuf = val * 2; - /* - * Wake up sending tasks if we - * upped the value. - */ - sk->sk_write_space(sk); - break; + /* + * Wake up sending tasks if we + * upped the value. + */ + sk->sk_write_space(sk); + break; - case SO_SNDBUFFORCE: - if (!capable(CAP_NET_ADMIN)) { - ret = -EPERM; - break; - } - goto set_sndbuf; + case SO_SNDBUFFORCE: + if (!capable(CAP_NET_ADMIN)) { + ret = -EPERM; + break; + } + goto set_sndbuf; - case SO_RCVBUF: - /* Don't error on this BSD doesn't and if you think - about it this is right. Otherwise apps have to - play 'guess the biggest size' games. 
RCVBUF/SNDBUF - are treated in BSD as hints */ + case SO_RCVBUF: + /* Don't error on this BSD doesn't and if you think + about it this is right. Otherwise apps have to + play 'guess the biggest size' games. RCVBUF/SNDBUF + are treated in BSD as hints */ - if (val > sysctl_rmem_max) - val = sysctl_rmem_max; + if (val > sysctl_rmem_max) + val = sysctl_rmem_max; set_rcvbuf: - sk->sk_userlocks |= SOCK_RCVBUF_LOCK; - /* - * We double it on the way in to account for - * "struct sk_buff" etc. overhead. Applications - * assume that the SO_RCVBUF setting they make will - * allow that much actual data to be received on that - * socket. - * - * Applications are unaware that "struct sk_buff" and - * other overheads allocate from the receive buffer - * during socket buffer allocation. - * - * And after considering the possible alternatives, - * returning the value we actually used in getsockopt - * is the most desirable behavior. - */ - if ((val * 2) < SOCK_MIN_RCVBUF) - sk->sk_rcvbuf = SOCK_MIN_RCVBUF; - else - sk->sk_rcvbuf = val * 2; + sk->sk_userlocks |= SOCK_RCVBUF_LOCK; + /* + * We double it on the way in to account for + * "struct sk_buff" etc. overhead. Applications + * assume that the SO_RCVBUF setting they make will + * allow that much actual data to be received on that + * socket. + * + * Applications are unaware that "struct sk_buff" and + * other overheads allocate from the receive buffer + * during socket buffer allocation. + * + * And after considering the possible alternatives, + * returning the value we actually used in getsockopt + * is the most desirable behavior. + */ + if ((val * 2) < SOCK_MIN_RCVBUF) + sk->sk_rcvbuf = SOCK_MIN_RCVBUF; + else + sk->sk_rcvbuf = val * 2; + break; + + case SO_RCVBUFFORCE: + if (!capable(CAP_NET_ADMIN)) { + ret = -EPERM; break; + } + goto set_rcvbuf; - case SO_RCVBUFFORCE: - if (!capable(CAP_NET_ADMIN)) { - ret = -EPERM; - break; - } - goto set_rcvbuf; - - case SO_KEEPALIVE: + case SO_KEEPALIVE: #ifdef CONFIG_INET - if (sk->sk_protocol == IPPROTO_TCP) - tcp_set_keepalive(sk, valbool); + if (sk->sk_protocol == IPPROTO_TCP) + tcp_set_keepalive(sk, valbool); #endif - sock_valbool_flag(sk, SOCK_KEEPOPEN, valbool); - break; + sock_valbool_flag(sk, SOCK_KEEPOPEN, valbool); + break; - case SO_OOBINLINE: - sock_valbool_flag(sk, SOCK_URGINLINE, valbool); + case SO_OOBINLINE: + sock_valbool_flag(sk, SOCK_URGINLINE, valbool); + break; + + case SO_NO_CHECK: + sk->sk_no_check = valbool; + break; + + case SO_PRIORITY: + if ((val >= 0 && val <= 6) || capable(CAP_NET_ADMIN)) + sk->sk_priority = val; + else + ret = -EPERM; + break; + + case SO_LINGER: + if (optlen < sizeof(ling)) { + ret = -EINVAL; /* 1003.1g */ break; - - case SO_NO_CHECK: - sk->sk_no_check = valbool; - break; - - case SO_PRIORITY: - if ((val >= 0 && val <= 6) || capable(CAP_NET_ADMIN)) - sk->sk_priority = val; - else - ret = -EPERM; + } + if (copy_from_user(&ling,optval,sizeof(ling))) { + ret = -EFAULT; break; - - case SO_LINGER: - if(optlen= MAX_SCHEDULE_TIMEOUT/HZ) - sk->sk_lingertime = MAX_SCHEDULE_TIMEOUT; - else + if ((unsigned int)ling.l_linger >= MAX_SCHEDULE_TIMEOUT/HZ) + sk->sk_lingertime = MAX_SCHEDULE_TIMEOUT; + else #endif - sk->sk_lingertime = (unsigned int)ling.l_linger * HZ; - sock_set_flag(sk, SOCK_LINGER); - } - break; - - case SO_BSDCOMPAT: - sock_warn_obsolete_bsdism("setsockopt"); - break; + sk->sk_lingertime = (unsigned int)ling.l_linger * HZ; + sock_set_flag(sk, SOCK_LINGER); + } + break; - case SO_PASSCRED: - if (valbool) - set_bit(SOCK_PASSCRED, &sock->flags); + case 
SO_BSDCOMPAT: + sock_warn_obsolete_bsdism("setsockopt"); + break; + + case SO_PASSCRED: + if (valbool) + set_bit(SOCK_PASSCRED, &sock->flags); + else + clear_bit(SOCK_PASSCRED, &sock->flags); + break; + + case SO_TIMESTAMP: + case SO_TIMESTAMPNS: + if (valbool) { + if (optname == SO_TIMESTAMP) + sock_reset_flag(sk, SOCK_RCVTSTAMPNS); else - clear_bit(SOCK_PASSCRED, &sock->flags); - break; + sock_set_flag(sk, SOCK_RCVTSTAMPNS); + sock_set_flag(sk, SOCK_RCVTSTAMP); + sock_enable_timestamp(sk); + } else { + sock_reset_flag(sk, SOCK_RCVTSTAMP); + sock_reset_flag(sk, SOCK_RCVTSTAMPNS); + } + break; - case SO_TIMESTAMP: - if (valbool) { - sock_set_flag(sk, SOCK_RCVTSTAMP); - sock_enable_timestamp(sk); - } else - sock_reset_flag(sk, SOCK_RCVTSTAMP); - break; + case SO_RCVLOWAT: + if (val < 0) + val = INT_MAX; + sk->sk_rcvlowat = val ? : 1; + break; + + case SO_RCVTIMEO: + ret = sock_set_timeout(&sk->sk_rcvtimeo, optval, optlen); + break; + + case SO_SNDTIMEO: + ret = sock_set_timeout(&sk->sk_sndtimeo, optval, optlen); + break; - case SO_RCVLOWAT: - if (val < 0) - val = INT_MAX; - sk->sk_rcvlowat = val ? : 1; - break; +#ifdef CONFIG_NETDEVICES + case SO_BINDTODEVICE: + { + char devname[IFNAMSIZ]; - case SO_RCVTIMEO: - ret = sock_set_timeout(&sk->sk_rcvtimeo, optval, optlen); + /* Sorry... */ + if (!capable(CAP_NET_RAW)) { + ret = -EPERM; break; + } - case SO_SNDTIMEO: - ret = sock_set_timeout(&sk->sk_sndtimeo, optval, optlen); - break; + /* Bind this socket to a particular device like "eth0", + * as specified in the passed interface name. If the + * name is "" or the option length is zero the socket + * is not bound. + */ -#ifdef CONFIG_NETDEVICES - case SO_BINDTODEVICE: - { - char devname[IFNAMSIZ]; - - /* Sorry... */ - if (!capable(CAP_NET_RAW)) { - ret = -EPERM; + if (!valbool) { + sk->sk_bound_dev_if = 0; + } else { + if (optlen > IFNAMSIZ - 1) + optlen = IFNAMSIZ - 1; + memset(devname, 0, sizeof(devname)); + if (copy_from_user(devname, optval, optlen)) { + ret = -EFAULT; break; } - /* Bind this socket to a particular device like "eth0", - * as specified in the passed interface name. If the - * name is "" or the option length is zero the socket - * is not bound. - */ + /* Remove any cached route for this socket. */ + sk_dst_reset(sk); - if (!valbool) { + if (devname[0] == '\0') { sk->sk_bound_dev_if = 0; } else { - if (optlen > IFNAMSIZ - 1) - optlen = IFNAMSIZ - 1; - memset(devname, 0, sizeof(devname)); - if (copy_from_user(devname, optval, optlen)) { - ret = -EFAULT; + struct net_device *dev = dev_get_by_name(devname); + if (!dev) { + ret = -ENODEV; break; } - - /* Remove any cached route for this socket. 
*/ - sk_dst_reset(sk); - - if (devname[0] == '\0') { - sk->sk_bound_dev_if = 0; - } else { - struct net_device *dev = dev_get_by_name(devname); - if (!dev) { - ret = -ENODEV; - break; - } - sk->sk_bound_dev_if = dev->ifindex; - dev_put(dev); - } + sk->sk_bound_dev_if = dev->ifindex; + dev_put(dev); } - break; } + break; + } #endif - case SO_ATTACH_FILTER: - ret = -EINVAL; - if (optlen == sizeof(struct sock_fprog)) { - struct sock_fprog fprog; + case SO_ATTACH_FILTER: + ret = -EINVAL; + if (optlen == sizeof(struct sock_fprog)) { + struct sock_fprog fprog; - ret = -EFAULT; - if (copy_from_user(&fprog, optval, sizeof(fprog))) - break; + ret = -EFAULT; + if (copy_from_user(&fprog, optval, sizeof(fprog))) + break; - ret = sk_attach_filter(&fprog, sk); - } - break; + ret = sk_attach_filter(&fprog, sk); + } + break; - case SO_DETACH_FILTER: - rcu_read_lock_bh(); - filter = rcu_dereference(sk->sk_filter); - if (filter) { - rcu_assign_pointer(sk->sk_filter, NULL); - sk_filter_release(sk, filter); - rcu_read_unlock_bh(); - break; - } + case SO_DETACH_FILTER: + rcu_read_lock_bh(); + filter = rcu_dereference(sk->sk_filter); + if (filter) { + rcu_assign_pointer(sk->sk_filter, NULL); + sk_filter_release(sk, filter); rcu_read_unlock_bh(); - ret = -ENONET; - break; - - case SO_PASSSEC: - if (valbool) - set_bit(SOCK_PASSSEC, &sock->flags); - else - clear_bit(SOCK_PASSSEC, &sock->flags); break; + } + rcu_read_unlock_bh(); + ret = -ENONET; + break; + + case SO_PASSSEC: + if (valbool) + set_bit(SOCK_PASSSEC, &sock->flags); + else + clear_bit(SOCK_PASSSEC, &sock->flags); + break; /* We implement the SO_SNDLOWAT etc to not be settable (1003.1g 5.3) */ - default: - ret = -ENOPROTOOPT; - break; + default: + ret = -ENOPROTOOPT; + break; } release_sock(sk); return ret; @@ -641,8 +646,7 @@ int sock_getsockopt(struct socket *sock, { struct sock *sk = sock->sk; - union - { + union { int val; struct linger ling; struct timeval tm; @@ -651,148 +655,153 @@ int sock_getsockopt(struct socket *sock, unsigned int lv = sizeof(int); int len; - if(get_user(len,optlen)) + if (get_user(len, optlen)) return -EFAULT; - if(len < 0) + if (len < 0) return -EINVAL; - switch(optname) - { - case SO_DEBUG: - v.val = sock_flag(sk, SOCK_DBG); - break; - - case SO_DONTROUTE: - v.val = sock_flag(sk, SOCK_LOCALROUTE); - break; - - case SO_BROADCAST: - v.val = !!sock_flag(sk, SOCK_BROADCAST); - break; - - case SO_SNDBUF: - v.val = sk->sk_sndbuf; - break; - - case SO_RCVBUF: - v.val = sk->sk_rcvbuf; - break; - - case SO_REUSEADDR: - v.val = sk->sk_reuse; - break; - - case SO_KEEPALIVE: - v.val = !!sock_flag(sk, SOCK_KEEPOPEN); - break; - - case SO_TYPE: - v.val = sk->sk_type; - break; - - case SO_ERROR: - v.val = -sock_error(sk); - if(v.val==0) - v.val = xchg(&sk->sk_err_soft, 0); - break; - - case SO_OOBINLINE: - v.val = !!sock_flag(sk, SOCK_URGINLINE); - break; - - case SO_NO_CHECK: - v.val = sk->sk_no_check; - break; - - case SO_PRIORITY: - v.val = sk->sk_priority; - break; - - case SO_LINGER: - lv = sizeof(v.ling); - v.ling.l_onoff = !!sock_flag(sk, SOCK_LINGER); - v.ling.l_linger = sk->sk_lingertime / HZ; - break; - - case SO_BSDCOMPAT: - sock_warn_obsolete_bsdism("getsockopt"); - break; - - case SO_TIMESTAMP: - v.val = sock_flag(sk, SOCK_RCVTSTAMP); - break; - - case SO_RCVTIMEO: - lv=sizeof(struct timeval); - if (sk->sk_rcvtimeo == MAX_SCHEDULE_TIMEOUT) { - v.tm.tv_sec = 0; - v.tm.tv_usec = 0; - } else { - v.tm.tv_sec = sk->sk_rcvtimeo / HZ; - v.tm.tv_usec = ((sk->sk_rcvtimeo % HZ) * 1000000) / HZ; - } - break; - - case SO_SNDTIMEO: - 
lv=sizeof(struct timeval); - if (sk->sk_sndtimeo == MAX_SCHEDULE_TIMEOUT) { - v.tm.tv_sec = 0; - v.tm.tv_usec = 0; - } else { - v.tm.tv_sec = sk->sk_sndtimeo / HZ; - v.tm.tv_usec = ((sk->sk_sndtimeo % HZ) * 1000000) / HZ; - } - break; - - case SO_RCVLOWAT: - v.val = sk->sk_rcvlowat; - break; + switch(optname) { + case SO_DEBUG: + v.val = sock_flag(sk, SOCK_DBG); + break; + + case SO_DONTROUTE: + v.val = sock_flag(sk, SOCK_LOCALROUTE); + break; + + case SO_BROADCAST: + v.val = !!sock_flag(sk, SOCK_BROADCAST); + break; + + case SO_SNDBUF: + v.val = sk->sk_sndbuf; + break; + + case SO_RCVBUF: + v.val = sk->sk_rcvbuf; + break; + + case SO_REUSEADDR: + v.val = sk->sk_reuse; + break; + + case SO_KEEPALIVE: + v.val = !!sock_flag(sk, SOCK_KEEPOPEN); + break; + + case SO_TYPE: + v.val = sk->sk_type; + break; + + case SO_ERROR: + v.val = -sock_error(sk); + if (v.val==0) + v.val = xchg(&sk->sk_err_soft, 0); + break; + + case SO_OOBINLINE: + v.val = !!sock_flag(sk, SOCK_URGINLINE); + break; + + case SO_NO_CHECK: + v.val = sk->sk_no_check; + break; + + case SO_PRIORITY: + v.val = sk->sk_priority; + break; + + case SO_LINGER: + lv = sizeof(v.ling); + v.ling.l_onoff = !!sock_flag(sk, SOCK_LINGER); + v.ling.l_linger = sk->sk_lingertime / HZ; + break; + + case SO_BSDCOMPAT: + sock_warn_obsolete_bsdism("getsockopt"); + break; + + case SO_TIMESTAMP: + v.val = sock_flag(sk, SOCK_RCVTSTAMP) && + !sock_flag(sk, SOCK_RCVTSTAMPNS); + break; + + case SO_TIMESTAMPNS: + v.val = sock_flag(sk, SOCK_RCVTSTAMPNS); + break; + + case SO_RCVTIMEO: + lv=sizeof(struct timeval); + if (sk->sk_rcvtimeo == MAX_SCHEDULE_TIMEOUT) { + v.tm.tv_sec = 0; + v.tm.tv_usec = 0; + } else { + v.tm.tv_sec = sk->sk_rcvtimeo / HZ; + v.tm.tv_usec = ((sk->sk_rcvtimeo % HZ) * 1000000) / HZ; + } + break; - case SO_SNDLOWAT: - v.val=1; - break; + case SO_SNDTIMEO: + lv=sizeof(struct timeval); + if (sk->sk_sndtimeo == MAX_SCHEDULE_TIMEOUT) { + v.tm.tv_sec = 0; + v.tm.tv_usec = 0; + } else { + v.tm.tv_sec = sk->sk_sndtimeo / HZ; + v.tm.tv_usec = ((sk->sk_sndtimeo % HZ) * 1000000) / HZ; + } + break; - case SO_PASSCRED: - v.val = test_bit(SOCK_PASSCRED, &sock->flags) ? 1 : 0; - break; + case SO_RCVLOWAT: + v.val = sk->sk_rcvlowat; + break; + + case SO_SNDLOWAT: + v.val=1; + break; + + case SO_PASSCRED: + v.val = test_bit(SOCK_PASSCRED, &sock->flags) ? 1 : 0; + break; + + case SO_PEERCRED: + if (len > sizeof(sk->sk_peercred)) + len = sizeof(sk->sk_peercred); + if (copy_to_user(optval, &sk->sk_peercred, len)) + return -EFAULT; + goto lenout; - case SO_PEERCRED: - if (len > sizeof(sk->sk_peercred)) - len = sizeof(sk->sk_peercred); - if (copy_to_user(optval, &sk->sk_peercred, len)) - return -EFAULT; - goto lenout; - - case SO_PEERNAME: - { - char address[128]; - - if (sock->ops->getname(sock, (struct sockaddr *)address, &lv, 2)) - return -ENOTCONN; - if (lv < len) - return -EINVAL; - if (copy_to_user(optval, address, len)) - return -EFAULT; - goto lenout; - } + case SO_PEERNAME: + { + char address[128]; - /* Dubious BSD thing... Probably nobody even uses it, but - * the UNIX standard wants it for whatever reason... -DaveM - */ - case SO_ACCEPTCONN: - v.val = sk->sk_state == TCP_LISTEN; - break; + if (sock->ops->getname(sock, (struct sockaddr *)address, &lv, 2)) + return -ENOTCONN; + if (lv < len) + return -EINVAL; + if (copy_to_user(optval, address, len)) + return -EFAULT; + goto lenout; + } - case SO_PASSSEC: - v.val = test_bit(SOCK_PASSSEC, &sock->flags) ? 1 : 0; - break; + /* Dubious BSD thing... 
Probably nobody even uses it, but + * the UNIX standard wants it for whatever reason... -DaveM + */ + case SO_ACCEPTCONN: + v.val = sk->sk_state == TCP_LISTEN; + break; + + case SO_PASSSEC: + v.val = test_bit(SOCK_PASSSEC, &sock->flags) ? 1 : 0; + break; - case SO_PEERSEC: - return security_socket_getpeersec_stream(sock, optval, optlen, len); + case SO_PEERSEC: + return security_socket_getpeersec_stream(sock, optval, optlen, len); - default: - return(-ENOPROTOOPT); + default: + return -ENOPROTOOPT; } + if (len > lv) len = lv; if (copy_to_user(optval, &v, len)) @@ -904,6 +913,7 @@ struct sock *sk_clone(const struct sock sk_node_init(&newsk->sk_node); sock_lock_init(newsk); bh_lock_sock(newsk); + newsk->sk_backlog.head = newsk->sk_backlog.tail = NULL; atomic_set(&newsk->sk_rmem_alloc, 0); atomic_set(&newsk->sk_wmem_alloc, 0); @@ -923,7 +933,6 @@ struct sock *sk_clone(const struct sock newsk->sk_wmem_queued = 0; newsk->sk_forward_alloc = 0; newsk->sk_send_head = NULL; - newsk->sk_backlog.head = newsk->sk_backlog.tail = NULL; newsk->sk_userlocks = sk->sk_userlocks & ~SOCK_BINDPORT_LOCK; sock_reset_flag(newsk, SOCK_DONE); @@ -970,6 +979,21 @@ out: EXPORT_SYMBOL_GPL(sk_clone); +void sk_setup_caps(struct sock *sk, struct dst_entry *dst) +{ + __sk_dst_set(sk, dst); + sk->sk_route_caps = dst->dev->features; + if (sk->sk_route_caps & NETIF_F_GSO) + sk->sk_route_caps |= NETIF_F_GSO_MASK; + if (sk_can_gso(sk)) { + if (dst->header_len) + sk->sk_route_caps &= ~NETIF_F_GSO_MASK; + else + sk->sk_route_caps |= NETIF_F_SG | NETIF_F_HW_CSUM; + } +} +EXPORT_SYMBOL_GPL(sk_setup_caps); + void __init sk_init(void) { if (num_physpages <= 4096) { @@ -1220,13 +1244,13 @@ static void __lock_sock(struct sock *sk) { DEFINE_WAIT(wait); - for(;;) { + for (;;) { prepare_to_wait_exclusive(&sk->sk_lock.wq, &wait, TASK_UNINTERRUPTIBLE); spin_unlock_bh(&sk->sk_lock.slock); schedule(); spin_lock_bh(&sk->sk_lock.slock); - if(!sock_owned_by_user(sk)) + if (!sock_owned_by_user(sk)) break; } finish_wait(&sk->sk_lock.wq, &wait); @@ -1258,7 +1282,7 @@ static void __release_sock(struct sock * } while (skb != NULL); bh_lock_sock(sk); - } while((skb = sk->sk_backlog.head) != NULL); + } while ((skb = sk->sk_backlog.head) != NULL); } /** @@ -1420,7 +1444,7 @@ static void sock_def_write_space(struct /* Do not wake up a writer until he can make "significant" * progress. --DaveM */ - if((atomic_read(&sk->sk_wmem_alloc) << 1) <= sk->sk_sndbuf) { + if ((atomic_read(&sk->sk_wmem_alloc) << 1) <= sk->sk_sndbuf) { if (sk->sk_sleep && waitqueue_active(sk->sk_sleep)) wake_up_interruptible(sk->sk_sleep); @@ -1482,8 +1506,7 @@ void sock_init_data(struct socket *sock, sock_set_flag(sk, SOCK_ZAPPED); - if(sock) - { + if (sock) { sk->sk_type = sock->type; sk->sk_sleep = &sock->wait; sock->sk = sk; @@ -1512,8 +1535,7 @@ void sock_init_data(struct socket *sock, sk->sk_rcvtimeo = MAX_SCHEDULE_TIMEOUT; sk->sk_sndtimeo = MAX_SCHEDULE_TIMEOUT; - sk->sk_stamp.tv_sec = -1L; - sk->sk_stamp.tv_usec = -1L; + sk->sk_stamp = ktime_set(-1L, -1L); atomic_set(&sk->sk_refcnt, 1); } @@ -1554,17 +1576,36 @@ EXPORT_SYMBOL(release_sock); int sock_get_timestamp(struct sock *sk, struct timeval __user *userstamp) { + struct timeval tv; if (!sock_flag(sk, SOCK_TIMESTAMP)) sock_enable_timestamp(sk); - if (sk->sk_stamp.tv_sec == -1) + tv = ktime_to_timeval(sk->sk_stamp); + if (tv.tv_sec == -1) return -ENOENT; - if (sk->sk_stamp.tv_sec == 0) - do_gettimeofday(&sk->sk_stamp); - return copy_to_user(userstamp, &sk->sk_stamp, sizeof(struct timeval)) ? 
- -EFAULT : 0; + if (tv.tv_sec == 0) { + sk->sk_stamp = ktime_get_real(); + tv = ktime_to_timeval(sk->sk_stamp); + } + return copy_to_user(userstamp, &tv, sizeof(tv)) ? -EFAULT : 0; } EXPORT_SYMBOL(sock_get_timestamp); +int sock_get_timestampns(struct sock *sk, struct timespec __user *userstamp) +{ + struct timespec ts; + if (!sock_flag(sk, SOCK_TIMESTAMP)) + sock_enable_timestamp(sk); + ts = ktime_to_timespec(sk->sk_stamp); + if (ts.tv_sec == -1) + return -ENOENT; + if (ts.tv_sec == 0) { + sk->sk_stamp = ktime_get_real(); + ts = ktime_to_timespec(sk->sk_stamp); + } + return copy_to_user(userstamp, &ts, sizeof(ts)) ? -EFAULT : 0; +} +EXPORT_SYMBOL(sock_get_timestampns); + void sock_enable_timestamp(struct sock *sk) { if (!sock_flag(sk, SOCK_TIMESTAMP)) { @@ -1899,7 +1940,7 @@ static int proto_seq_show(struct seq_fil return 0; } -static struct seq_operations proto_seq_ops = { +static const struct seq_operations proto_seq_ops = { .start = proto_seq_start, .next = proto_seq_next, .stop = proto_seq_stop, diff -puN net/core/sysctl_net_core.c~git-net net/core/sysctl_net_core.c --- a/net/core/sysctl_net_core.c~git-net +++ a/net/core/sysctl_net_core.c @@ -136,6 +136,14 @@ ctl_table core_table[] = { .mode = 0644, .proc_handler = &proc_dointvec }, + { + .ctl_name = NET_CORE_WARNINGS, + .procname = "warnings", + .data = &net_msg_warn, + .maxlen = sizeof(int), + .mode = 0644, + .proc_handler = &proc_dointvec + }, { .ctl_name = 0 } }; diff -puN net/core/utils.c~git-net net/core/utils.c --- a/net/core/utils.c~git-net +++ a/net/core/utils.c @@ -30,8 +30,10 @@ #include #include -int net_msg_cost = 5*HZ; -int net_msg_burst = 10; +int net_msg_cost __read_mostly = 5*HZ; +int net_msg_burst __read_mostly = 10; +int net_msg_warn __read_mostly = 1; +EXPORT_SYMBOL(net_msg_warn); /* * All net warning printk()s should be guarded by this function. 
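The sock.c hunks above introduce a nanosecond-resolution receive timestamp: SO_TIMESTAMP/SO_TIMESTAMPNS are now handled together in sock_setsockopt()/sock_getsockopt() via the new SOCK_RCVTSTAMPNS flag, sk->sk_stamp becomes a ktime_t, and sock_get_timestampns() is added (it appears to serve as the kernel backend for a SIOCGSTAMPNS-style query elsewhere in the series; the cmsg delivery path is likewise not shown in these hunks). A minimal userspace sketch of how a receiver might consume the new option follows; it is not part of the patch, and the fallback option value, port number, and buffer sizes are illustrative assumptions only.

#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <netinet/in.h>
#include <time.h>

#ifndef SO_TIMESTAMPNS
#define SO_TIMESTAMPNS 35          /* common value; architecture-dependent, illustrative only */
#endif
#ifndef SCM_TIMESTAMPNS
#define SCM_TIMESTAMPNS SO_TIMESTAMPNS
#endif

int main(void)
{
	int fd = socket(AF_INET, SOCK_DGRAM, 0);
	int on = 1;
	struct sockaddr_in sin = { .sin_family = AF_INET,
				   .sin_port = htons(5555),          /* arbitrary test port */
				   .sin_addr.s_addr = htonl(INADDR_ANY) };
	char data[2048], cbuf[512];
	struct iovec iov = { .iov_base = data, .iov_len = sizeof(data) };
	struct msghdr msg = { .msg_iov = &iov, .msg_iovlen = 1,
			      .msg_control = cbuf, .msg_controllen = sizeof(cbuf) };
	struct cmsghdr *cmsg;

	if (fd < 0 || bind(fd, (struct sockaddr *)&sin, sizeof(sin)) < 0)
		return 1;

	/* Ask the kernel to attach nanosecond receive timestamps. */
	if (setsockopt(fd, SOL_SOCKET, SO_TIMESTAMPNS, &on, sizeof(on)) < 0)
		return 1;

	if (recvmsg(fd, &msg, 0) < 0)
		return 1;

	/* The timestamp is delivered as ancillary data carrying a struct timespec. */
	for (cmsg = CMSG_FIRSTHDR(&msg); cmsg; cmsg = CMSG_NXTHDR(&msg, cmsg)) {
		if (cmsg->cmsg_level == SOL_SOCKET &&
		    cmsg->cmsg_type == SCM_TIMESTAMPNS) {
			struct timespec ts;
			memcpy(&ts, CMSG_DATA(cmsg), sizeof(ts));
			printf("rx at %ld.%09ld\n", (long)ts.tv_sec, ts.tv_nsec);
		}
	}
	return 0;
}

Selecting SO_TIMESTAMPNS clears SOCK_RCVTSTAMPNS's microsecond sibling, so an application gets either timeval- or timespec-based stamps, never both, which matches the mutually exclusive getsockopt() reporting added above.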
diff -puN net/core/wireless.c~git-net net/core/wireless.c --- a/net/core/wireless.c~git-net +++ a/net/core/wireless.c @@ -104,12 +104,10 @@ /* Debugging stuff */ #undef WE_IOCTL_DEBUG /* Debug IOCTL API */ -#undef WE_RTNETLINK_DEBUG /* Debug RtNetlink API */ #undef WE_EVENT_DEBUG /* Debug Event dispatcher */ #undef WE_SPY_DEBUG /* Debug enhanced spy support */ /* Options */ -//CONFIG_NET_WIRELESS_RTNETLINK /* Wireless requests over RtNetlink */ #define WE_EVENT_RTNETLINK /* Propagate events using RtNetlink */ #define WE_SET_EVENT /* Generate an event on some set commands */ @@ -349,8 +347,7 @@ static const struct iw_ioctl_description .max_tokens = sizeof(struct iw_pmksa), }, }; -static const unsigned standard_ioctl_num = (sizeof(standard_ioctl) / - sizeof(struct iw_ioctl_description)); +static const unsigned standard_ioctl_num = ARRAY_SIZE(standard_ioctl); /* * Meta-data about all the additional standard Wireless Extension events @@ -400,8 +397,7 @@ static const struct iw_ioctl_description .max_tokens = sizeof(struct iw_pmkid_cand), }, }; -static const unsigned standard_event_num = (sizeof(standard_event) / - sizeof(struct iw_ioctl_description)); +static const unsigned standard_event_num = ARRAY_SIZE(standard_event); /* Size (in bytes) of the various private data types */ static const char iw_priv_type_size[] = { @@ -463,17 +459,17 @@ static inline iw_handler get_handler(str unsigned int index; /* *MUST* be unsigned */ /* Check if we have some wireless handlers defined */ - if(dev->wireless_handlers == NULL) + if (dev->wireless_handlers == NULL) return NULL; /* Try as a standard command */ index = cmd - SIOCIWFIRST; - if(index < dev->wireless_handlers->num_standard) + if (index < dev->wireless_handlers->num_standard) return dev->wireless_handlers->standard[index]; /* Try as a private command */ index = cmd - SIOCIWFIRSTPRIV; - if(index < dev->wireless_handlers->num_private) + if (index < dev->wireless_handlers->num_private) return dev->wireless_handlers->private[index]; /* Not found */ @@ -487,7 +483,7 @@ static inline iw_handler get_handler(str static inline struct iw_statistics *get_wireless_stats(struct net_device *dev) { /* New location */ - if((dev->wireless_handlers != NULL) && + if ((dev->wireless_handlers != NULL) && (dev->wireless_handlers->get_wireless_stats != NULL)) return dev->wireless_handlers->get_wireless_stats(dev); @@ -516,7 +512,7 @@ static inline struct iw_statistics *get_ */ static inline int call_commit_handler(struct net_device * dev) { - if((netif_running(dev)) && + if ((netif_running(dev)) && (dev->wireless_handlers->standard[0] != NULL)) { /* Call the commit handler on the driver */ return dev->wireless_handlers->standard[0](dev, NULL, @@ -577,7 +573,7 @@ static int iw_handler_get_iwstats(struct wrqu->data.length = sizeof(struct iw_statistics); /* Check if we need to clear the updated flag */ - if(wrqu->data.flags != 0) + if (wrqu->data.flags != 0) stats->qual.updated &= ~IW_QUAL_ALL_UPDATED; return 0; } else @@ -596,12 +592,12 @@ static int iw_handler_get_private(struct char * extra) { /* Check if the driver has something to export */ - if((dev->wireless_handlers->num_private_args == 0) || + if ((dev->wireless_handlers->num_private_args == 0) || (dev->wireless_handlers->private_args == NULL)) return -EOPNOTSUPP; /* Check if there is enough buffer up there */ - if(wrqu->data.length < dev->wireless_handlers->num_private_args) { + if (wrqu->data.length < dev->wireless_handlers->num_private_args) { /* User space can't know in advance how large the buffer * needs to 
be. Give it a hint, so that we can support * any size buffer we want somewhat efficiently... */ @@ -680,7 +676,7 @@ static int wireless_seq_show(struct seq_ return 0; } -static struct seq_operations wireless_seq_ops = { +static const struct seq_operations wireless_seq_ops = { .start = dev_seq_start, .next = dev_seq_next, .stop = dev_seq_stop, @@ -735,7 +731,7 @@ static int ioctl_standard_call(struct ne int ret = -EINVAL; /* Get the description of the IOCTL */ - if((cmd - SIOCIWFIRST) >= standard_ioctl_num) + if ((cmd - SIOCIWFIRST) >= standard_ioctl_num) return -EOPNOTSUPP; descr = &(standard_ioctl[cmd - SIOCIWFIRST]); @@ -750,14 +746,14 @@ static int ioctl_standard_call(struct ne info.flags = 0; /* Check if we have a pointer to user space data or not */ - if(descr->header_type != IW_HEADER_TYPE_POINT) { + if (descr->header_type != IW_HEADER_TYPE_POINT) { /* No extra arguments. Trivial to handle */ ret = handler(dev, &info, &(iwr->u), NULL); #ifdef WE_SET_EVENT /* Generate an event to notify listeners of the change */ - if((descr->flags & IW_DESCR_FLAG_EVENT) && + if ((descr->flags & IW_DESCR_FLAG_EVENT) && ((ret == 0) || (ret == -EIWCOMMIT))) wireless_send_event(dev, cmd, &(iwr->u), NULL); #endif /* WE_SET_EVENT */ @@ -800,19 +796,19 @@ static int ioctl_standard_call(struct ne iwr->u.data.length -= essid_compat; /* Check what user space is giving us */ - if(IW_IS_SET(cmd)) { + if (IW_IS_SET(cmd)) { /* Check NULL pointer */ - if((iwr->u.data.pointer == NULL) && + if ((iwr->u.data.pointer == NULL) && (iwr->u.data.length != 0)) return -EFAULT; /* Check if number of token fits within bounds */ - if(iwr->u.data.length > descr->max_tokens) + if (iwr->u.data.length > descr->max_tokens) return -E2BIG; - if(iwr->u.data.length < descr->min_tokens) + if (iwr->u.data.length < descr->min_tokens) return -EINVAL; } else { /* Check NULL pointer */ - if(iwr->u.data.pointer == NULL) + if (iwr->u.data.pointer == NULL) return -EFAULT; /* Save user space buffer size for checking */ user_length = iwr->u.data.length; @@ -822,7 +818,7 @@ static int ioctl_standard_call(struct ne * implied by the test at the end. */ /* Support for very large requests */ - if((descr->flags & IW_DESCR_FLAG_NOMAX) && + if ((descr->flags & IW_DESCR_FLAG_NOMAX) && (user_length > descr->max_tokens)) { /* Allow userspace to GET more than max so * we can support any size GET requests. 
@@ -848,7 +844,7 @@ static int ioctl_standard_call(struct ne } /* If it is a SET, get all the extra data in here */ - if(IW_IS_SET(cmd) && (iwr->u.data.length != 0)) { + if (IW_IS_SET(cmd) && (iwr->u.data.length != 0)) { err = copy_from_user(extra, iwr->u.data.pointer, iwr->u.data.length * descr->token_size); @@ -871,7 +867,7 @@ static int ioctl_standard_call(struct ne /* If we have something to return to the user */ if (!ret && IW_IS_GET(cmd)) { /* Check if there is enough buffer up there */ - if(user_length < iwr->u.data.length) { + if (user_length < iwr->u.data.length) { kfree(extra); return -E2BIG; } @@ -890,9 +886,9 @@ static int ioctl_standard_call(struct ne #ifdef WE_SET_EVENT /* Generate an event to notify listeners of the change */ - if((descr->flags & IW_DESCR_FLAG_EVENT) && + if ((descr->flags & IW_DESCR_FLAG_EVENT) && ((ret == 0) || (ret == -EIWCOMMIT))) { - if(descr->flags & IW_DESCR_FLAG_RESTRICT) + if (descr->flags & IW_DESCR_FLAG_RESTRICT) /* If the event is restricted, don't * export the payload */ wireless_send_event(dev, cmd, &(iwr->u), NULL); @@ -907,7 +903,7 @@ static int ioctl_standard_call(struct ne } /* Call commit handler if needed and defined */ - if(ret == -EIWCOMMIT) + if (ret == -EIWCOMMIT) ret = call_commit_handler(dev); /* Here, we will generate the appropriate event if needed */ @@ -944,8 +940,8 @@ static inline int ioctl_private_call(str int ret = -EINVAL; /* Get the description of the IOCTL */ - for(i = 0; i < dev->wireless_handlers->num_private_args; i++) - if(cmd == dev->wireless_handlers->private_args[i].cmd) { + for (i = 0; i < dev->wireless_handlers->num_private_args; i++) + if (cmd == dev->wireless_handlers->private_args[i].cmd) { descr = &(dev->wireless_handlers->private_args[i]); break; } @@ -953,7 +949,7 @@ static inline int ioctl_private_call(str #ifdef WE_IOCTL_DEBUG printk(KERN_DEBUG "%s (WE) : Found private handler for 0x%04X\n", ifr->ifr_name, cmd); - if(descr) { + if (descr) { printk(KERN_DEBUG "%s (WE) : Name %s, set %X, get %X\n", dev->name, descr->name, descr->set_args, descr->get_args); @@ -961,11 +957,11 @@ static inline int ioctl_private_call(str #endif /* WE_IOCTL_DEBUG */ /* Compute the size of the set/get arguments */ - if(descr != NULL) { - if(IW_IS_SET(cmd)) { + if (descr != NULL) { + if (IW_IS_SET(cmd)) { int offset = 0; /* For sub-ioctls */ /* Check for sub-ioctl handler */ - if(descr->name[0] == '\0') + if (descr->name[0] == '\0') /* Reserve one int for sub-ioctl index */ offset = sizeof(__u32); @@ -973,7 +969,7 @@ static inline int ioctl_private_call(str extra_size = get_priv_size(descr->set_args); /* Does it fits in iwr ? */ - if((descr->set_args & IW_PRIV_SIZE_FIXED) && + if ((descr->set_args & IW_PRIV_SIZE_FIXED) && ((extra_size + offset) <= IFNAMSIZ)) extra_size = 0; } else { @@ -981,7 +977,7 @@ static inline int ioctl_private_call(str extra_size = get_priv_size(descr->get_args); /* Does it fits in iwr ? */ - if((descr->get_args & IW_PRIV_SIZE_FIXED) && + if ((descr->get_args & IW_PRIV_SIZE_FIXED) && (extra_size <= IFNAMSIZ)) extra_size = 0; } @@ -992,7 +988,7 @@ static inline int ioctl_private_call(str info.flags = 0; /* Check if we have a pointer to user space data or not. */ - if(extra_size == 0) { + if (extra_size == 0) { /* No extra arguments. 
Trivial to handle */ ret = handler(dev, &info, &(iwr->u), (char *) &(iwr->u)); } else { @@ -1000,19 +996,19 @@ static inline int ioctl_private_call(str int err; /* Check what user space is giving us */ - if(IW_IS_SET(cmd)) { + if (IW_IS_SET(cmd)) { /* Check NULL pointer */ - if((iwr->u.data.pointer == NULL) && + if ((iwr->u.data.pointer == NULL) && (iwr->u.data.length != 0)) return -EFAULT; /* Does it fits within bounds ? */ - if(iwr->u.data.length > (descr->set_args & + if (iwr->u.data.length > (descr->set_args & IW_PRIV_SIZE_MASK)) return -E2BIG; } else { /* Check NULL pointer */ - if(iwr->u.data.pointer == NULL) + if (iwr->u.data.pointer == NULL) return -EFAULT; } @@ -1029,7 +1025,7 @@ static inline int ioctl_private_call(str } /* If it is a SET, get all the extra data in here */ - if(IW_IS_SET(cmd) && (iwr->u.data.length != 0)) { + if (IW_IS_SET(cmd) && (iwr->u.data.length != 0)) { err = copy_from_user(extra, iwr->u.data.pointer, extra_size); if (err) { @@ -1071,7 +1067,7 @@ static inline int ioctl_private_call(str /* Call commit handler if needed and defined */ - if(ret == -EIWCOMMIT) + if (ret == -EIWCOMMIT) ret = call_commit_handler(dev); return ret; @@ -1098,788 +1094,54 @@ int wireless_process_ioctl(struct ifreq /* A bunch of special cases, then the generic case... * Note that 'cmd' is already filtered in dev_ioctl() with * (cmd >= SIOCIWFIRST && cmd <= SIOCIWLAST) */ - switch(cmd) - { - case SIOCGIWSTATS: - /* Get Wireless Stats */ + switch (cmd) { + case SIOCGIWSTATS: + /* Get Wireless Stats */ + return ioctl_standard_call(dev, + ifr, + cmd, + &iw_handler_get_iwstats); + + case SIOCGIWPRIV: + /* Check if we have some wireless handlers defined */ + if (dev->wireless_handlers != NULL) { + /* We export to user space the definition of + * the private handler ourselves */ return ioctl_standard_call(dev, ifr, cmd, - &iw_handler_get_iwstats); - - case SIOCGIWPRIV: - /* Check if we have some wireless handlers defined */ - if(dev->wireless_handlers != NULL) { - /* We export to user space the definition of - * the private handler ourselves */ + &iw_handler_get_private); + } + // ## Fall-through for old API ## + default: + /* Generic IOCTL */ + /* Basic check */ + if (!netif_device_present(dev)) + return -ENODEV; + /* New driver API : try to find the handler */ + handler = get_handler(dev, cmd); + if (handler != NULL) { + /* Standard and private are not the same */ + if (cmd < SIOCIWFIRSTPRIV) return ioctl_standard_call(dev, ifr, cmd, - &iw_handler_get_private); - } - // ## Fall-through for old API ## - default: - /* Generic IOCTL */ - /* Basic check */ - if (!netif_device_present(dev)) - return -ENODEV; - /* New driver API : try to find the handler */ - handler = get_handler(dev, cmd); - if(handler != NULL) { - /* Standard and private are not the same */ - if(cmd < SIOCIWFIRSTPRIV) - return ioctl_standard_call(dev, - ifr, - cmd, - handler); - else - return ioctl_private_call(dev, - ifr, - cmd, - handler); - } - /* Old driver API : call driver ioctl handler */ - if (dev->do_ioctl) { - return dev->do_ioctl(dev, ifr, cmd); - } - return -EOPNOTSUPP; - } - /* Not reached */ - return -EINVAL; -} - -/********************** RTNETLINK REQUEST API **********************/ -/* - * The alternate user space API to configure all those Wireless Extensions - * is through RtNetlink. - * This API support only the new driver API (iw_handler). - * - * This RtNetlink API use the same query/reply model as the ioctl API. 
- * Maximum effort has been done to fit in the RtNetlink model, and - * we support both RtNetlink Set and RtNelink Get operations. - * On the other hand, we don't offer Dump operations because of the - * following reasons : - * o Large number of parameters, most optional - * o Large size of some parameters (> 100 bytes) - * o Each parameters need to be extracted from hardware - * o Scan requests can take seconds and disable network activity. - * Because of this high cost/overhead, we want to return only the - * parameters the user application is really interested in. - * We could offer partial Dump using the IW_DESCR_FLAG_DUMP flag. - * - * The API uses the standard RtNetlink socket. When the RtNetlink code - * find a IFLA_WIRELESS field in a RtNetlink SET_LINK request, - * it calls here. - */ - -#ifdef CONFIG_NET_WIRELESS_RTNETLINK -/* ---------------------------------------------------------------- */ -/* - * Wrapper to call a standard Wireless Extension GET handler. - * We do various checks and call the handler with the proper args. - */ -static int rtnetlink_standard_get(struct net_device * dev, - struct iw_event * request, - int request_len, - iw_handler handler, - char ** p_buf, - int * p_len) -{ - const struct iw_ioctl_description * descr = NULL; - unsigned int cmd; - union iwreq_data * wrqu; - int hdr_len; - struct iw_request_info info; - char * buffer = NULL; - int buffer_size = 0; - int ret = -EINVAL; - - /* Get the description of the Request */ - cmd = request->cmd; - if((cmd - SIOCIWFIRST) >= standard_ioctl_num) - return -EOPNOTSUPP; - descr = &(standard_ioctl[cmd - SIOCIWFIRST]); - -#ifdef WE_RTNETLINK_DEBUG - printk(KERN_DEBUG "%s (WE.r) : Found standard handler for 0x%04X\n", - dev->name, cmd); - printk(KERN_DEBUG "%s (WE.r) : Header type : %d, Token type : %d, size : %d, token : %d\n", dev->name, descr->header_type, descr->token_type, descr->token_size, descr->max_tokens); -#endif /* WE_RTNETLINK_DEBUG */ - - /* Check if wrqu is complete */ - hdr_len = event_type_size[descr->header_type]; - if(request_len < hdr_len) { -#ifdef WE_RTNETLINK_DEBUG - printk(KERN_DEBUG - "%s (WE.r) : Wireless request too short (%d)\n", - dev->name, request_len); -#endif /* WE_RTNETLINK_DEBUG */ - return -EINVAL; - } - - /* Prepare the call */ - info.cmd = cmd; - info.flags = 0; - - /* Check if we have extra data in the reply or not */ - if(descr->header_type != IW_HEADER_TYPE_POINT) { - - /* Create the kernel buffer that we will return. - * It's at an offset to match the TYPE_POINT case... */ - buffer_size = request_len + IW_EV_POINT_OFF; - buffer = kmalloc(buffer_size, GFP_KERNEL); - if (buffer == NULL) { - return -ENOMEM; - } - /* Copy event data */ - memcpy(buffer + IW_EV_POINT_OFF, request, request_len); - /* Use our own copy of wrqu */ - wrqu = (union iwreq_data *) (buffer + IW_EV_POINT_OFF - + IW_EV_LCP_PK_LEN); - - /* No extra arguments. Trivial to handle */ - ret = handler(dev, &info, wrqu, NULL); - - } else { - union iwreq_data wrqu_point; - char * extra = NULL; - int extra_size = 0; - - /* Get a temp copy of wrqu (skip pointer) */ - memcpy(((char *) &wrqu_point) + IW_EV_POINT_OFF, - ((char *) request) + IW_EV_LCP_PK_LEN, - IW_EV_POINT_LEN - IW_EV_LCP_PK_LEN); - - /* Calculate space needed by arguments. Always allocate - * for max space. Easier, and won't last long... 
*/ - extra_size = descr->max_tokens * descr->token_size; - /* Support for very large requests */ - if((descr->flags & IW_DESCR_FLAG_NOMAX) && - (wrqu_point.data.length > descr->max_tokens)) - extra_size = (wrqu_point.data.length - * descr->token_size); - buffer_size = extra_size + IW_EV_POINT_PK_LEN + IW_EV_POINT_OFF; -#ifdef WE_RTNETLINK_DEBUG - printk(KERN_DEBUG "%s (WE.r) : Malloc %d bytes (%d bytes)\n", - dev->name, extra_size, buffer_size); -#endif /* WE_RTNETLINK_DEBUG */ - - /* Create the kernel buffer that we will return */ - buffer = kmalloc(buffer_size, GFP_KERNEL); - if (buffer == NULL) { - return -ENOMEM; - } - - /* Put wrqu in the right place (just before extra). - * Leave space for IWE header and dummy pointer... - * Note that IW_EV_LCP_PK_LEN==4 bytes, so it's still aligned. - */ - memcpy(buffer + IW_EV_LCP_PK_LEN + IW_EV_POINT_OFF, - ((char *) &wrqu_point) + IW_EV_POINT_OFF, - IW_EV_POINT_PK_LEN - IW_EV_LCP_PK_LEN); - wrqu = (union iwreq_data *) (buffer + IW_EV_LCP_PK_LEN); - - /* Extra comes logically after that. Offset +12 bytes. */ - extra = buffer + IW_EV_POINT_OFF + IW_EV_POINT_PK_LEN; - - /* Call the handler */ - ret = handler(dev, &info, wrqu, extra); - - /* Calculate real returned length */ - extra_size = (wrqu->data.length * descr->token_size); - /* Re-adjust reply size */ - request->len = extra_size + IW_EV_POINT_PK_LEN; - - /* Put the iwe header where it should, i.e. scrap the - * dummy pointer. */ - memcpy(buffer + IW_EV_POINT_OFF, request, IW_EV_LCP_PK_LEN); - -#ifdef WE_RTNETLINK_DEBUG - printk(KERN_DEBUG "%s (WE.r) : Reply 0x%04X, hdr_len %d, tokens %d, extra_size %d, buffer_size %d\n", dev->name, cmd, hdr_len, wrqu->data.length, extra_size, buffer_size); -#endif /* WE_RTNETLINK_DEBUG */ - - /* Check if there is enough buffer up there */ - if(wrqu_point.data.length < wrqu->data.length) - ret = -E2BIG; - } - - /* Return the buffer to the caller */ - if (!ret) { - *p_buf = buffer; - *p_len = request->len; - } else { - /* Cleanup */ - if(buffer) - kfree(buffer); - } - - return ret; -} - -/* ---------------------------------------------------------------- */ -/* - * Wrapper to call a standard Wireless Extension SET handler. - * We do various checks and call the handler with the proper args. - */ -static inline int rtnetlink_standard_set(struct net_device * dev, - struct iw_event * request, - int request_len, - iw_handler handler) -{ - const struct iw_ioctl_description * descr = NULL; - unsigned int cmd; - union iwreq_data * wrqu; - union iwreq_data wrqu_point; - int hdr_len; - char * extra = NULL; - int extra_size = 0; - struct iw_request_info info; - int ret = -EINVAL; - - /* Get the description of the Request */ - cmd = request->cmd; - if((cmd - SIOCIWFIRST) >= standard_ioctl_num) - return -EOPNOTSUPP; - descr = &(standard_ioctl[cmd - SIOCIWFIRST]); - -#ifdef WE_RTNETLINK_DEBUG - printk(KERN_DEBUG "%s (WE.r) : Found standard SET handler for 0x%04X\n", - dev->name, cmd); - printk(KERN_DEBUG "%s (WE.r) : Header type : %d, Token type : %d, size : %d, token : %d\n", dev->name, descr->header_type, descr->token_type, descr->token_size, descr->max_tokens); -#endif /* WE_RTNETLINK_DEBUG */ - - /* Extract fixed header from request. This is properly aligned. 
*/ - wrqu = (union iwreq_data *) (((char *) request) + IW_EV_LCP_PK_LEN); - - /* Check if wrqu is complete */ - hdr_len = event_type_pk_size[descr->header_type]; - if(request_len < hdr_len) { -#ifdef WE_RTNETLINK_DEBUG - printk(KERN_DEBUG - "%s (WE.r) : Wireless request too short (%d)\n", - dev->name, request_len); -#endif /* WE_RTNETLINK_DEBUG */ - return -EINVAL; - } - - /* Prepare the call */ - info.cmd = cmd; - info.flags = 0; - - /* Check if we have extra data in the request or not */ - if(descr->header_type != IW_HEADER_TYPE_POINT) { - - /* No extra arguments. Trivial to handle */ - ret = handler(dev, &info, wrqu, NULL); - - } else { - int extra_len; - - /* Put wrqu in the right place (skip pointer) */ - memcpy(((char *) &wrqu_point) + IW_EV_POINT_OFF, - wrqu, IW_EV_POINT_PK_LEN - IW_EV_LCP_PK_LEN); - /* Don't forget about the event code... */ - wrqu = &wrqu_point; - - /* Check if number of token fits within bounds */ - if(wrqu_point.data.length > descr->max_tokens) - return -E2BIG; - if(wrqu_point.data.length < descr->min_tokens) - return -EINVAL; - - /* Real length of payload */ - extra_len = wrqu_point.data.length * descr->token_size; - - /* Check if request is self consistent */ - if((request_len - hdr_len) < extra_len) { -#ifdef WE_RTNETLINK_DEBUG - printk(KERN_DEBUG "%s (WE.r) : Wireless request data too short (%d)\n", - dev->name, extra_size); -#endif /* WE_RTNETLINK_DEBUG */ - return -EINVAL; - } - -#ifdef WE_RTNETLINK_DEBUG - printk(KERN_DEBUG "%s (WE.r) : Malloc %d bytes\n", - dev->name, extra_size); -#endif /* WE_RTNETLINK_DEBUG */ - - /* Always allocate for max space. Easier, and won't last - * long... */ - extra_size = descr->max_tokens * descr->token_size; - extra = kmalloc(extra_size, GFP_KERNEL); - if (extra == NULL) - return -ENOMEM; - - /* Copy extra in aligned buffer */ - memcpy(extra, ((char *) request) + hdr_len, extra_len); - - /* Call the handler */ - ret = handler(dev, &info, &wrqu_point, extra); - } - -#ifdef WE_SET_EVENT - /* Generate an event to notify listeners of the change */ - if((descr->flags & IW_DESCR_FLAG_EVENT) && - ((ret == 0) || (ret == -EIWCOMMIT))) { - if(descr->flags & IW_DESCR_FLAG_RESTRICT) - /* If the event is restricted, don't - * export the payload */ - wireless_send_event(dev, cmd, wrqu, NULL); - else - wireless_send_event(dev, cmd, wrqu, extra); - } -#endif /* WE_SET_EVENT */ - - /* Cleanup - I told you it wasn't that long ;-) */ - if(extra) - kfree(extra); - - /* Call commit handler if needed and defined */ - if(ret == -EIWCOMMIT) - ret = call_commit_handler(dev); - - return ret; -} - -/* ---------------------------------------------------------------- */ -/* - * Wrapper to call a private Wireless Extension GET handler. - * Same as above... - * It's not as nice and slimline as the standard wrapper. The cause - * is struct iw_priv_args, which was not really designed for the - * job we are going here. - * - * IMPORTANT : This function prevent to set and get data on the same - * IOCTL and enforce the SET/GET convention. Not doing it would be - * far too hairy... - * If you need to set and get data at the same time, please don't use - * a iw_handler but process it in your ioctl handler (i.e. use the - * old driver API). 
- */ -static inline int rtnetlink_private_get(struct net_device * dev, - struct iw_event * request, - int request_len, - iw_handler handler, - char ** p_buf, - int * p_len) -{ - const struct iw_priv_args * descr = NULL; - unsigned int cmd; - union iwreq_data * wrqu; - int hdr_len; - struct iw_request_info info; - int extra_size = 0; - int i; - char * buffer = NULL; - int buffer_size = 0; - int ret = -EINVAL; - - /* Get the description of the Request */ - cmd = request->cmd; - for(i = 0; i < dev->wireless_handlers->num_private_args; i++) - if(cmd == dev->wireless_handlers->private_args[i].cmd) { - descr = &(dev->wireless_handlers->private_args[i]); - break; - } - if(descr == NULL) - return -EOPNOTSUPP; - -#ifdef WE_RTNETLINK_DEBUG - printk(KERN_DEBUG "%s (WE.r) : Found private handler for 0x%04X\n", - dev->name, cmd); - printk(KERN_DEBUG "%s (WE.r) : Name %s, set %X, get %X\n", - dev->name, descr->name, descr->set_args, descr->get_args); -#endif /* WE_RTNETLINK_DEBUG */ - - /* Compute the max size of the get arguments */ - extra_size = get_priv_size(descr->get_args); - - /* Does it fits in wrqu ? */ - if((descr->get_args & IW_PRIV_SIZE_FIXED) && - (extra_size <= IFNAMSIZ)) { - hdr_len = extra_size; - extra_size = 0; - } else { - hdr_len = IW_EV_POINT_PK_LEN; - } - - /* Check if wrqu is complete */ - if(request_len < hdr_len) { -#ifdef WE_RTNETLINK_DEBUG - printk(KERN_DEBUG - "%s (WE.r) : Wireless request too short (%d)\n", - dev->name, request_len); -#endif /* WE_RTNETLINK_DEBUG */ - return -EINVAL; - } - - /* Prepare the call */ - info.cmd = cmd; - info.flags = 0; - - /* Check if we have a pointer to user space data or not. */ - if(extra_size == 0) { - - /* Create the kernel buffer that we will return. - * It's at an offset to match the TYPE_POINT case... */ - buffer_size = request_len + IW_EV_POINT_OFF; - buffer = kmalloc(buffer_size, GFP_KERNEL); - if (buffer == NULL) { - return -ENOMEM; - } - /* Copy event data */ - memcpy(buffer + IW_EV_POINT_OFF, request, request_len); - /* Use our own copy of wrqu */ - wrqu = (union iwreq_data *) (buffer + IW_EV_POINT_OFF - + IW_EV_LCP_PK_LEN); - - /* No extra arguments. Trivial to handle */ - ret = handler(dev, &info, wrqu, (char *) wrqu); - - } else { - char * extra; - - /* Buffer for full reply */ - buffer_size = extra_size + IW_EV_POINT_PK_LEN + IW_EV_POINT_OFF; - -#ifdef WE_RTNETLINK_DEBUG - printk(KERN_DEBUG "%s (WE.r) : Malloc %d bytes (%d bytes)\n", - dev->name, extra_size, buffer_size); -#endif /* WE_RTNETLINK_DEBUG */ - - /* Create the kernel buffer that we will return */ - buffer = kmalloc(buffer_size, GFP_KERNEL); - if (buffer == NULL) { - return -ENOMEM; - } - - /* Put wrqu in the right place (just before extra). - * Leave space for IWE header and dummy pointer... - * Note that IW_EV_LCP_PK_LEN==4 bytes, so it's still aligned. - */ - memcpy(buffer + IW_EV_LCP_PK_LEN + IW_EV_POINT_OFF, - ((char *) request) + IW_EV_LCP_PK_LEN, - IW_EV_POINT_PK_LEN - IW_EV_LCP_PK_LEN); - wrqu = (union iwreq_data *) (buffer + IW_EV_LCP_PK_LEN); - - /* Extra comes logically after that. Offset +12 bytes. */ - extra = buffer + IW_EV_POINT_OFF + IW_EV_POINT_PK_LEN; - - /* Call the handler */ - ret = handler(dev, &info, wrqu, extra); - - /* Adjust for the actual length if it's variable, - * avoid leaking kernel bits outside. */ - if (!(descr->get_args & IW_PRIV_SIZE_FIXED)) - extra_size = adjust_priv_size(descr->get_args, wrqu); - /* Re-adjust reply size */ - request->len = extra_size + IW_EV_POINT_PK_LEN; - - /* Put the iwe header where it should, i.e. 
scrap the - * dummy pointer. */ - memcpy(buffer + IW_EV_POINT_OFF, request, IW_EV_LCP_PK_LEN); - -#ifdef WE_RTNETLINK_DEBUG - printk(KERN_DEBUG "%s (WE.r) : Reply 0x%04X, hdr_len %d, tokens %d, extra_size %d, buffer_size %d\n", dev->name, cmd, hdr_len, wrqu->data.length, extra_size, buffer_size); -#endif /* WE_RTNETLINK_DEBUG */ - } - - /* Return the buffer to the caller */ - if (!ret) { - *p_buf = buffer; - *p_len = request->len; - } else { - /* Cleanup */ - if(buffer) - kfree(buffer); - } - - return ret; -} - -/* ---------------------------------------------------------------- */ -/* - * Wrapper to call a private Wireless Extension SET handler. - * Same as above... - * It's not as nice and slimline as the standard wrapper. The cause - * is struct iw_priv_args, which was not really designed for the - * job we are going here. - * - * IMPORTANT : This function prevent to set and get data on the same - * IOCTL and enforce the SET/GET convention. Not doing it would be - * far too hairy... - * If you need to set and get data at the same time, please don't use - * a iw_handler but process it in your ioctl handler (i.e. use the - * old driver API). - */ -static inline int rtnetlink_private_set(struct net_device * dev, - struct iw_event * request, - int request_len, - iw_handler handler) -{ - const struct iw_priv_args * descr = NULL; - unsigned int cmd; - union iwreq_data * wrqu; - union iwreq_data wrqu_point; - int hdr_len; - char * extra = NULL; - int extra_size = 0; - int offset = 0; /* For sub-ioctls */ - struct iw_request_info info; - int i; - int ret = -EINVAL; - - /* Get the description of the Request */ - cmd = request->cmd; - for(i = 0; i < dev->wireless_handlers->num_private_args; i++) - if(cmd == dev->wireless_handlers->private_args[i].cmd) { - descr = &(dev->wireless_handlers->private_args[i]); - break; - } - if(descr == NULL) - return -EOPNOTSUPP; - -#ifdef WE_RTNETLINK_DEBUG - printk(KERN_DEBUG "%s (WE.r) : Found private handler for 0x%04X\n", - ifr->ifr_name, cmd); - printk(KERN_DEBUG "%s (WE.r) : Name %s, set %X, get %X\n", - dev->name, descr->name, descr->set_args, descr->get_args); -#endif /* WE_RTNETLINK_DEBUG */ - - /* Compute the size of the set arguments */ - /* Check for sub-ioctl handler */ - if(descr->name[0] == '\0') - /* Reserve one int for sub-ioctl index */ - offset = sizeof(__u32); - - /* Size of set arguments */ - extra_size = get_priv_size(descr->set_args); - - /* Does it fits in wrqu ? */ - if((descr->set_args & IW_PRIV_SIZE_FIXED) && - (extra_size <= IFNAMSIZ)) { - hdr_len = IW_EV_LCP_PK_LEN + extra_size; - extra_size = 0; - } else { - hdr_len = IW_EV_POINT_PK_LEN; - } - - /* Extract fixed header from request. This is properly aligned. */ - wrqu = (union iwreq_data *) (((char *) request) + IW_EV_LCP_PK_LEN); - - /* Check if wrqu is complete */ - if(request_len < hdr_len) { -#ifdef WE_RTNETLINK_DEBUG - printk(KERN_DEBUG - "%s (WE.r) : Wireless request too short (%d)\n", - dev->name, request_len); -#endif /* WE_RTNETLINK_DEBUG */ - return -EINVAL; - } - - /* Prepare the call */ - info.cmd = cmd; - info.flags = 0; - - /* Check if we have a pointer to user space data or not. */ - if(extra_size == 0) { - - /* No extra arguments. Trivial to handle */ - ret = handler(dev, &info, wrqu, (char *) wrqu); - - } else { - int extra_len; - - /* Put wrqu in the right place (skip pointer) */ - memcpy(((char *) &wrqu_point) + IW_EV_POINT_OFF, - wrqu, IW_EV_POINT_PK_LEN - IW_EV_LCP_PK_LEN); - - /* Does it fits within bounds ? 
*/ - if(wrqu_point.data.length > (descr->set_args & - IW_PRIV_SIZE_MASK)) - return -E2BIG; - - /* Real length of payload */ - extra_len = adjust_priv_size(descr->set_args, &wrqu_point); - - /* Check if request is self consistent */ - if((request_len - hdr_len) < extra_len) { -#ifdef WE_RTNETLINK_DEBUG - printk(KERN_DEBUG "%s (WE.r) : Wireless request data too short (%d)\n", - dev->name, extra_size); -#endif /* WE_RTNETLINK_DEBUG */ - return -EINVAL; + handler); + else + return ioctl_private_call(dev, + ifr, + cmd, + handler); + } + /* Old driver API : call driver ioctl handler */ + if (dev->do_ioctl) { + return dev->do_ioctl(dev, ifr, cmd); } - -#ifdef WE_RTNETLINK_DEBUG - printk(KERN_DEBUG "%s (WE.r) : Malloc %d bytes\n", - dev->name, extra_size); -#endif /* WE_RTNETLINK_DEBUG */ - - /* Always allocate for max space. Easier, and won't last - * long... */ - extra = kmalloc(extra_size, GFP_KERNEL); - if (extra == NULL) - return -ENOMEM; - - /* Copy extra in aligned buffer */ - memcpy(extra, ((char *) request) + hdr_len, extra_len); - - /* Call the handler */ - ret = handler(dev, &info, &wrqu_point, extra); - - /* Cleanup - I told you it wasn't that long ;-) */ - kfree(extra); - } - - /* Call commit handler if needed and defined */ - if(ret == -EIWCOMMIT) - ret = call_commit_handler(dev); - - return ret; -} - -/* ---------------------------------------------------------------- */ -/* - * Main RtNetlink dispatcher. Called from the main networking code - * (do_getlink() in net/core/rtnetlink.c). - * Check the type of Request and call the appropriate wrapper... - */ -int wireless_rtnetlink_get(struct net_device * dev, - char * data, - int len, - char ** p_buf, - int * p_len) -{ - struct iw_event * request = (struct iw_event *) data; - iw_handler handler; - - /* Check length */ - if(len < IW_EV_LCP_PK_LEN) { - printk(KERN_DEBUG "%s (WE.r) : RtNetlink request too short (%d)\n", - dev->name, len); - return -EINVAL; - } - - /* ReCheck length (len may have padding) */ - if(request->len > len) { - printk(KERN_DEBUG "%s (WE.r) : RtNetlink request len invalid (%d-%d)\n", - dev->name, request->len, len); - return -EINVAL; - } - - /* Only accept GET requests in here */ - if(!IW_IS_GET(request->cmd)) - return -EOPNOTSUPP; - - /* If command is `get the encoding parameters', check if - * the user has the right to do it */ - if (request->cmd == SIOCGIWENCODE || - request->cmd == SIOCGIWENCODEEXT) { - if (!capable(CAP_NET_ADMIN)) - return -EPERM; - } - - /* Special cases */ - if(request->cmd == SIOCGIWSTATS) - /* Get Wireless Stats */ - return rtnetlink_standard_get(dev, - request, - request->len, - &iw_handler_get_iwstats, - p_buf, p_len); - if(request->cmd == SIOCGIWPRIV) { - /* Check if we have some wireless handlers defined */ - if(dev->wireless_handlers == NULL) - return -EOPNOTSUPP; - /* Get Wireless Stats */ - return rtnetlink_standard_get(dev, - request, - request->len, - &iw_handler_get_private, - p_buf, p_len); - } - - /* Basic check */ - if (!netif_device_present(dev)) - return -ENODEV; - - /* Try to find the handler */ - handler = get_handler(dev, request->cmd); - if(handler != NULL) { - /* Standard and private are not the same */ - if(request->cmd < SIOCIWFIRSTPRIV) - return rtnetlink_standard_get(dev, - request, - request->len, - handler, - p_buf, p_len); - else - return rtnetlink_private_get(dev, - request, - request->len, - handler, - p_buf, p_len); - } - - return -EOPNOTSUPP; -} - -/* ---------------------------------------------------------------- */ -/* - * Main RtNetlink dispatcher. 
Called from the main networking code - * (do_setlink() in net/core/rtnetlink.c). - * Check the type of Request and call the appropriate wrapper... - */ -int wireless_rtnetlink_set(struct net_device * dev, - char * data, - int len) -{ - struct iw_event * request = (struct iw_event *) data; - iw_handler handler; - - /* Check length */ - if(len < IW_EV_LCP_PK_LEN) { - printk(KERN_DEBUG "%s (WE.r) : RtNetlink request too short (%d)\n", - dev->name, len); - return -EINVAL; - } - - /* ReCheck length (len may have padding) */ - if(request->len > len) { - printk(KERN_DEBUG "%s (WE.r) : RtNetlink request len invalid (%d-%d)\n", - dev->name, request->len, len); - return -EINVAL; - } - - /* Only accept SET requests in here */ - if(!IW_IS_SET(request->cmd)) return -EOPNOTSUPP; - - /* Basic check */ - if (!netif_device_present(dev)) - return -ENODEV; - - /* New driver API : try to find the handler */ - handler = get_handler(dev, request->cmd); - if(handler != NULL) { - /* Standard and private are not the same */ - if(request->cmd < SIOCIWFIRSTPRIV) - return rtnetlink_standard_set(dev, - request, - request->len, - handler); - else - return rtnetlink_private_set(dev, - request, - request->len, - handler); } - - return -EOPNOTSUPP; + /* Not reached */ + return -EINVAL; } -#endif /* CONFIG_NET_WIRELESS_RTNETLINK */ /************************* EVENT PROCESSING *************************/ @@ -1941,7 +1203,7 @@ static inline int rtnetlink_fill_iwinfo( { struct ifinfomsg *r; struct nlmsghdr *nlh; - unsigned char *b = skb->tail; + unsigned char *b = skb_tail_pointer(skb); nlh = NLMSG_PUT(skb, 0, 0, type, sizeof(*r)); r = NLMSG_DATA(nlh); @@ -1955,12 +1217,12 @@ static inline int rtnetlink_fill_iwinfo( /* Add the wireless events in the netlink packet */ RTA_PUT(skb, IFLA_WIRELESS, event_len, event); - nlh->nlmsg_len = skb->tail - b; + nlh->nlmsg_len = skb_tail_pointer(skb) - b; return skb->len; nlmsg_failure: rtattr_failure: - skb_trim(skb, b - skb->data); + nlmsg_trim(skb, b); return -1; } @@ -2015,17 +1277,17 @@ void wireless_send_event(struct net_devi unsigned cmd_index; /* *MUST* be unsigned */ /* Get the description of the Event */ - if(cmd <= SIOCIWLAST) { + if (cmd <= SIOCIWLAST) { cmd_index = cmd - SIOCIWFIRST; - if(cmd_index < standard_ioctl_num) + if (cmd_index < standard_ioctl_num) descr = &(standard_ioctl[cmd_index]); } else { cmd_index = cmd - IWEVFIRST; - if(cmd_index < standard_event_num) + if (cmd_index < standard_event_num) descr = &(standard_event[cmd_index]); } /* Don't accept unknown events */ - if(descr == NULL) { + if (descr == NULL) { /* Note : we don't return an error to the driver, because * the driver would not know what to do about it. 
It can't * return an error to the user, because the event is not @@ -2044,18 +1306,18 @@ void wireless_send_event(struct net_devi #endif /* WE_EVENT_DEBUG */ /* Check extra parameters and set extra_len */ - if(descr->header_type == IW_HEADER_TYPE_POINT) { + if (descr->header_type == IW_HEADER_TYPE_POINT) { /* Check if number of token fits within bounds */ - if(wrqu->data.length > descr->max_tokens) { + if (wrqu->data.length > descr->max_tokens) { printk(KERN_ERR "%s (WE) : Wireless Event too big (%d)\n", dev->name, wrqu->data.length); return; } - if(wrqu->data.length < descr->min_tokens) { + if (wrqu->data.length < descr->min_tokens) { printk(KERN_ERR "%s (WE) : Wireless Event too small (%d)\n", dev->name, wrqu->data.length); return; } /* Calculate extra_len - extra is NULL for restricted events */ - if(extra != NULL) + if (extra != NULL) extra_len = wrqu->data.length * descr->token_size; /* Always at an offset in wrqu */ wrqu_off = IW_EV_POINT_OFF; @@ -2074,14 +1336,14 @@ void wireless_send_event(struct net_devi /* Create temporary buffer to hold the event */ event = kmalloc(event_len, GFP_ATOMIC); - if(event == NULL) + if (event == NULL) return; /* Fill event */ event->len = event_len; event->cmd = cmd; memcpy(&event->u, ((char *) wrqu) + wrqu_off, hdr_len - IW_EV_LCP_LEN); - if(extra != NULL) + if (extra != NULL) memcpy(((char *) event) + hdr_len, extra, extra_len); #ifdef WE_EVENT_RTNETLINK @@ -2116,7 +1378,7 @@ void wireless_send_event(struct net_devi static inline struct iw_spy_data * get_spydata(struct net_device *dev) { /* This is the new way */ - if(dev->wireless_data) + if (dev->wireless_data) return(dev->wireless_data->spy_data); return NULL; } @@ -2134,7 +1396,7 @@ int iw_handler_set_spy(struct net_device struct sockaddr * address = (struct sockaddr *) extra; /* Make sure driver is not buggy or using the old API */ - if(!spydata) + if (!spydata) return -EOPNOTSUPP; /* Disable spy collection while we copy the addresses. @@ -2151,11 +1413,11 @@ int iw_handler_set_spy(struct net_device smp_wmb(); /* Are there are addresses to copy? */ - if(wrqu->data.length > 0) { + if (wrqu->data.length > 0) { int i; /* Copy addresses */ - for(i = 0; i < wrqu->data.length; i++) + for (i = 0; i < wrqu->data.length; i++) memcpy(spydata->spy_address[i], address[i].sa_data, ETH_ALEN); /* Reset stats */ @@ -2199,23 +1461,23 @@ int iw_handler_get_spy(struct net_device int i; /* Make sure driver is not buggy or using the old API */ - if(!spydata) + if (!spydata) return -EOPNOTSUPP; wrqu->data.length = spydata->spy_number; /* Copy addresses. */ - for(i = 0; i < spydata->spy_number; i++) { + for (i = 0; i < spydata->spy_number; i++) { memcpy(address[i].sa_data, spydata->spy_address[i], ETH_ALEN); address[i].sa_family = AF_UNIX; } /* Copy stats to the user buffer (just after). */ - if(spydata->spy_number > 0) + if (spydata->spy_number > 0) memcpy(extra + (sizeof(struct sockaddr) *spydata->spy_number), spydata->spy_stat, sizeof(struct iw_quality) * spydata->spy_number); /* Reset updated flags. 
*/ - for(i = 0; i < spydata->spy_number; i++) + for (i = 0; i < spydata->spy_number; i++) spydata->spy_stat[i].updated &= ~IW_QUAL_ALL_UPDATED; return 0; } @@ -2233,7 +1495,7 @@ int iw_handler_set_thrspy(struct net_dev struct iw_thrspy * threshold = (struct iw_thrspy *) extra; /* Make sure driver is not buggy or using the old API */ - if(!spydata) + if (!spydata) return -EOPNOTSUPP; /* Just do it */ @@ -2263,7 +1525,7 @@ int iw_handler_get_thrspy(struct net_dev struct iw_thrspy * threshold = (struct iw_thrspy *) extra; /* Make sure driver is not buggy or using the old API */ - if(!spydata) + if (!spydata) return -EOPNOTSUPP; /* Just do it */ @@ -2327,7 +1589,7 @@ void wireless_spy_update(struct net_devi int match = -1; /* Make sure driver is not buggy or using the old API */ - if(!spydata) + if (!spydata) return; #ifdef WE_SPY_DEBUG @@ -2335,8 +1597,8 @@ void wireless_spy_update(struct net_devi #endif /* WE_SPY_DEBUG */ /* Update all records that match */ - for(i = 0; i < spydata->spy_number; i++) - if(!compare_ether_addr(address, spydata->spy_address[i])) { + for (i = 0; i < spydata->spy_number; i++) + if (!compare_ether_addr(address, spydata->spy_address[i])) { memcpy(&(spydata->spy_stat[i]), wstats, sizeof(struct iw_quality)); match = i; @@ -2346,15 +1608,15 @@ void wireless_spy_update(struct net_devi * To avoid event storms, we have a simple hysteresis : we generate * event only when we go under the low threshold or above the * high threshold. */ - if(match >= 0) { - if(spydata->spy_thr_under[match]) { - if(wstats->level > spydata->spy_thr_high.level) { + if (match >= 0) { + if (spydata->spy_thr_under[match]) { + if (wstats->level > spydata->spy_thr_high.level) { spydata->spy_thr_under[match] = 0; iw_send_thrspy_event(dev, spydata, address, wstats); } } else { - if(wstats->level < spydata->spy_thr_low.level) { + if (wstats->level < spydata->spy_thr_low.level) { spydata->spy_thr_under[match] = 1; iw_send_thrspy_event(dev, spydata, address, wstats); diff -puN net/dccp/ackvec.c~git-net net/dccp/ackvec.c --- a/net/dccp/ackvec.c~git-net +++ a/net/dccp/ackvec.c @@ -157,7 +157,7 @@ struct dccp_ackvec *dccp_ackvec_alloc(co if (av != NULL) { av->dccpav_buf_head = DCCP_MAX_ACKVEC_LEN - 1; - av->dccpav_buf_ackno = DCCP_MAX_SEQNO + 1; + av->dccpav_buf_ackno = UINT48_MAX + 1; av->dccpav_buf_nonce = av->dccpav_buf_nonce = 0; av->dccpav_time.tv_sec = 0; av->dccpav_time.tv_usec = 0; diff -puN net/dccp/ccids/ccid3.c~git-net net/dccp/ccids/ccid3.c --- a/net/dccp/ccids/ccid3.c~git-net +++ a/net/dccp/ccids/ccid3.c @@ -33,7 +33,6 @@ * along with this program; if not, write to the Free Software * Foundation, Inc., 675 Mass Ave, Cambridge, MA 02139, USA. */ - #include "../ccid.h" #include "../dccp.h" #include "lib/packet_history.h" @@ -52,6 +51,9 @@ static struct dccp_tx_hist *ccid3_tx_his static struct dccp_rx_hist *ccid3_rx_hist; static struct dccp_li_hist *ccid3_li_hist; +/* + * Transmitter Half-Connection Routines + */ #ifdef CONFIG_IP_DCCP_CCID3_DEBUG static const char *ccid3_tx_state_name(enum ccid3_hc_tx_states state) { @@ -80,23 +82,37 @@ static void ccid3_hc_tx_set_state(struct } /* - * Recalculate scheduled nominal send time t_nom, inter-packet interval - * t_ipi, and delta value. Should be called after each change to X. + * Compute the initial sending rate X_init according to RFC 3390: + * w_init = min(4 * MSS, max(2 * MSS, 4380 bytes)) + * X_init = w_init / RTT + * For consistency with other parts of the code, X_init is scaled by 2^6. 
*/ -static inline void ccid3_update_send_time(struct ccid3_hc_tx_sock *hctx) +static inline u64 rfc3390_initial_rate(struct sock *sk) { - timeval_sub_usecs(&hctx->ccid3hctx_t_nom, hctx->ccid3hctx_t_ipi); + const struct dccp_sock *dp = dccp_sk(sk); + const __u32 w_init = min(4 * dp->dccps_mss_cache, + max(2 * dp->dccps_mss_cache, 4380U)); - /* Calculate new t_ipi = s / X_inst (X_inst is in 64 * bytes/second) */ - hctx->ccid3hctx_t_ipi = scaled_div(hctx->ccid3hctx_s, - hctx->ccid3hctx_x >> 6); + return scaled_div(w_init << 6, ccid3_hc_tx_sk(sk)->ccid3hctx_rtt); +} - /* Update nominal send time with regard to the new t_ipi */ - timeval_add_usecs(&hctx->ccid3hctx_t_nom, hctx->ccid3hctx_t_ipi); +/* + * Recalculate t_ipi and delta (should be called whenever X changes) + */ +static inline void ccid3_update_send_interval(struct ccid3_hc_tx_sock *hctx) +{ + /* Calculate new t_ipi = s / X_inst (X_inst is in 64 * bytes/second) */ + hctx->ccid3hctx_t_ipi = scaled_div32(((u64)hctx->ccid3hctx_s) << 6, + hctx->ccid3hctx_x); /* Calculate new delta by delta = min(t_ipi / 2, t_gran / 2) */ hctx->ccid3hctx_delta = min_t(u32, hctx->ccid3hctx_t_ipi / 2, TFRC_OPSYS_HALF_TIME_GRAN); + + ccid3_pr_debug("t_ipi=%u, delta=%u, s=%u, X=%u\n", + hctx->ccid3hctx_t_ipi, hctx->ccid3hctx_delta, + hctx->ccid3hctx_s, (unsigned)(hctx->ccid3hctx_x >> 6)); + } /* * Update X by @@ -112,19 +128,28 @@ static inline void ccid3_update_send_tim * fine-grained resolution of sending rates. This requires scaling by 2^6 * throughout the code. Only X_calc is unscaled (in bytes/second). * - * If X has changed, we also update the scheduled send time t_now, - * the inter-packet interval t_ipi, and the delta value. */ static void ccid3_hc_tx_update_x(struct sock *sk, struct timeval *now) { struct ccid3_hc_tx_sock *hctx = ccid3_hc_tx_sk(sk); + __u64 min_rate = 2 * hctx->ccid3hctx_x_recv; const __u64 old_x = hctx->ccid3hctx_x; + /* + * Handle IDLE periods: do not reduce below RFC3390 initial sending rate + * when idling [RFC 4342, 5.1]. See also draft-ietf-dccp-rfc3448bis. + * For consistency with X and X_recv, min_rate is also scaled by 2^6. + */ + if (unlikely(hctx->ccid3hctx_idle)) { + min_rate = rfc3390_initial_rate(sk); + min_rate = max(min_rate, 2 * hctx->ccid3hctx_x_recv); + } + if (hctx->ccid3hctx_p > 0) { hctx->ccid3hctx_x = min(((__u64)hctx->ccid3hctx_x_calc) << 6, - hctx->ccid3hctx_x_recv * 2); + min_rate); hctx->ccid3hctx_x = max(hctx->ccid3hctx_x, (((__u64)hctx->ccid3hctx_s) << 6) / TFRC_T_MBI); @@ -133,14 +158,21 @@ static void ccid3_hc_tx_update_x(struct (suseconds_t)hctx->ccid3hctx_rtt >= 0) { hctx->ccid3hctx_x = - max(2 * min(hctx->ccid3hctx_x, hctx->ccid3hctx_x_recv), + max(min(2 * hctx->ccid3hctx_x, min_rate), scaled_div(((__u64)hctx->ccid3hctx_s) << 6, hctx->ccid3hctx_rtt)); hctx->ccid3hctx_t_ld = *now; } - if (hctx->ccid3hctx_x != old_x) - ccid3_update_send_time(hctx); + if (hctx->ccid3hctx_x != old_x) { + ccid3_pr_debug("X_prev=%u, X_now=%u, X_calc=%u, " + "X_recv=%u\n", (unsigned)(old_x >> 6), + (unsigned)(hctx->ccid3hctx_x >> 6), + hctx->ccid3hctx_x_calc, + (unsigned)(hctx->ccid3hctx_x_recv >> 6)); + + ccid3_update_send_interval(hctx); + } } /* @@ -149,17 +181,12 @@ static void ccid3_hc_tx_update_x(struct */ static inline void ccid3_hc_tx_update_s(struct ccid3_hc_tx_sock *hctx, int len) { - if (unlikely(len == 0)) - ccid3_pr_debug("Packet payload length is 0 - not updating\n"); - else - hctx->ccid3hctx_s = hctx->ccid3hctx_s == 0 ? 
len : - (9 * hctx->ccid3hctx_s + len) / 10; - /* - * Note: We could do a potential optimisation here - when `s' changes, - * recalculate sending rate and consequently t_ipi, t_delta, and - * t_now. This is however non-standard, and the benefits are not - * clear, so it is currently left out. - */ + const u16 old_s = hctx->ccid3hctx_s; + + hctx->ccid3hctx_s = old_s == 0 ? len : (9 * old_s + len) / 10; + + if (hctx->ccid3hctx_s != old_s) + ccid3_update_send_interval(hctx); } /* @@ -193,6 +220,7 @@ static void ccid3_hc_tx_no_feedback_time { struct sock *sk = (struct sock *)data; struct ccid3_hc_tx_sock *hctx = ccid3_hc_tx_sk(sk); + struct timeval now; unsigned long t_nfb = USEC_PER_SEC / 5; bh_lock_sock(sk); @@ -205,6 +233,8 @@ static void ccid3_hc_tx_no_feedback_time ccid3_pr_debug("%s(%p, state=%s) - entry \n", dccp_role(sk), sk, ccid3_tx_state_name(hctx->ccid3hctx_state)); + hctx->ccid3hctx_idle = 1; + switch (hctx->ccid3hctx_state) { case TFRC_SSTATE_NO_FBACK: /* RFC 3448, 4.4: Halve send rate directly */ @@ -219,53 +249,37 @@ static void ccid3_hc_tx_no_feedback_time /* The value of R is still undefined and so we can not recompute * the timout value. Keep initial value as per [RFC 4342, 5]. */ t_nfb = TFRC_INITIAL_TIMEOUT; - ccid3_update_send_time(hctx); + ccid3_update_send_interval(hctx); break; case TFRC_SSTATE_FBACK: /* - * Check if IDLE since last timeout and recv rate is less than - * 4 packets (in units of 64*bytes/sec) per RTT + * Modify the cached value of X_recv [RFC 3448, 4.4] + * + * If (p == 0 || X_calc > 2 * X_recv) + * X_recv = max(X_recv / 2, s / (2 * t_mbi)); + * Else + * X_recv = X_calc / 4; + * + * Note that X_recv is scaled by 2^6 while X_calc is not */ - if (!hctx->ccid3hctx_idle || - (hctx->ccid3hctx_x_recv >= 4 * - scaled_div(((__u64)hctx->ccid3hctx_s) << 6, - hctx->ccid3hctx_rtt))) { - struct timeval now; + BUG_ON(hctx->ccid3hctx_p && !hctx->ccid3hctx_x_calc); - ccid3_pr_debug("%s(%p, state=%s), not idle\n", - dccp_role(sk), sk, - ccid3_tx_state_name(hctx->ccid3hctx_state)); + if (hctx->ccid3hctx_p == 0 || + (hctx->ccid3hctx_x_calc > (hctx->ccid3hctx_x_recv >> 5))) { - /* - * Modify the cached value of X_recv [RFC 3448, 4.4] - * - * If (p == 0 || X_calc > 2 * X_recv) - * X_recv = max(X_recv / 2, s / (2 * t_mbi)); - * Else - * X_recv = X_calc / 4; - * - * Note that X_recv is scaled by 2^6 while X_calc is not - */ - BUG_ON(hctx->ccid3hctx_p && !hctx->ccid3hctx_x_calc); + hctx->ccid3hctx_x_recv = + max(hctx->ccid3hctx_x_recv / 2, + (((__u64)hctx->ccid3hctx_s) << 6) / + (2 * TFRC_T_MBI)); - if (hctx->ccid3hctx_p == 0 || - (hctx->ccid3hctx_x_calc > - (hctx->ccid3hctx_x_recv >> 5))) { - - hctx->ccid3hctx_x_recv = - max(hctx->ccid3hctx_x_recv / 2, - (((__u64)hctx->ccid3hctx_s) << 6) / - (2 * TFRC_T_MBI)); - - if (hctx->ccid3hctx_p == 0) - dccp_timestamp(sk, &now); - } else { - hctx->ccid3hctx_x_recv = hctx->ccid3hctx_x_calc; - hctx->ccid3hctx_x_recv <<= 4; - } - /* Now recalculate X [RFC 3448, 4.3, step (4)] */ - ccid3_hc_tx_update_x(sk, &now); + if (hctx->ccid3hctx_p == 0) + dccp_timestamp(sk, &now); + } else { + hctx->ccid3hctx_x_recv = hctx->ccid3hctx_x_calc; + hctx->ccid3hctx_x_recv <<= 4; } + /* Now recalculate X [RFC 3448, 4.3, step (4)] */ + ccid3_hc_tx_update_x(sk, &now); /* * Schedule no feedback timer to expire in * max(t_RTO, 2 * s/X) = max(t_RTO, 2 * t_ipi) @@ -280,8 +294,6 @@ static void ccid3_hc_tx_no_feedback_time goto out; } - hctx->ccid3hctx_idle = 1; - restart_timer: sk_reset_timer(sk, &hctx->ccid3hctx_no_feedback_timer, jiffies + 
usecs_to_jiffies(t_nfb)); @@ -322,24 +334,35 @@ static int ccid3_hc_tx_send_packet(struc usecs_to_jiffies(TFRC_INITIAL_TIMEOUT))); hctx->ccid3hctx_last_win_count = 0; hctx->ccid3hctx_t_last_win_count = now; - ccid3_hc_tx_set_state(sk, TFRC_SSTATE_NO_FBACK); - - /* Set initial sending rate X/s to 1pps (X is scaled by 2^6) */ - ccid3_hc_tx_update_s(hctx, skb->len); - hctx->ccid3hctx_x = hctx->ccid3hctx_s; - hctx->ccid3hctx_x <<= 6; - - /* First timeout, according to [RFC 3448, 4.2], is 1 second */ - hctx->ccid3hctx_t_ipi = USEC_PER_SEC; - /* Initial delta: minimum of 0.5 sec and t_gran/2 */ - hctx->ccid3hctx_delta = TFRC_OPSYS_HALF_TIME_GRAN; /* Set t_0 for initial packet */ hctx->ccid3hctx_t_nom = now; + + hctx->ccid3hctx_s = skb->len; + + /* + * Use initial RTT sample when available: recommended by erratum + * to RFC 4342. This implements the initialisation procedure of + * draft rfc3448bis, section 4.2. Remember, X is scaled by 2^6. + */ + if (dp->dccps_syn_rtt) { + ccid3_pr_debug("SYN RTT = %uus\n", dp->dccps_syn_rtt); + hctx->ccid3hctx_rtt = dp->dccps_syn_rtt; + hctx->ccid3hctx_x = rfc3390_initial_rate(sk); + hctx->ccid3hctx_t_ld = now; + } else { + /* Sender does not have RTT sample: X = MSS/second */ + hctx->ccid3hctx_x = dp->dccps_mss_cache; + hctx->ccid3hctx_x <<= 6; + } + ccid3_update_send_interval(hctx); + + ccid3_hc_tx_set_state(sk, TFRC_SSTATE_NO_FBACK); break; case TFRC_SSTATE_NO_FBACK: case TFRC_SSTATE_FBACK: delay = timeval_delta(&hctx->ccid3hctx_t_nom, &now); + ccid3_pr_debug("delay=%ld\n", (long)delay); /* * Scheduling of packet transmissions [RFC 3448, 4.6] * @@ -361,6 +384,7 @@ static int ccid3_hc_tx_send_packet(struc /* prepare to send now (add options etc.) */ dp->dccps_hc_tx_insert_options = 1; DCCP_SKB_CB(skb)->dccpd_ccval = hctx->ccid3hctx_last_win_count; + hctx->ccid3hctx_idle = 0; /* set the nominal send time for the next following packet */ timeval_add_usecs(&hctx->ccid3hctx_t_nom, hctx->ccid3hctx_t_ipi); @@ -391,7 +415,6 @@ static void ccid3_hc_tx_packet_sent(stru packet->dccphtx_seqno = dccp_sk(sk)->dccps_gss; packet->dccphtx_rtt = hctx->ccid3hctx_rtt; packet->dccphtx_sent = 1; - hctx->ccid3hctx_idle = 0; } static void ccid3_hc_tx_packet_recv(struct sock *sk, struct sk_buff *skb) @@ -402,8 +425,7 @@ static void ccid3_hc_tx_packet_recv(stru struct dccp_tx_hist_entry *packet; struct timeval now; unsigned long t_nfb; - u32 pinv; - suseconds_t r_sample, t_elapsed; + u32 pinv, r_sample; BUG_ON(hctx == NULL); @@ -445,18 +467,10 @@ static void ccid3_hc_tx_packet_recv(stru * Calculate new round trip sample as per [RFC 3448, 4.3] by * R_sample = (now - t_recvdata) - t_elapsed */ - r_sample = timeval_delta(&now, &packet->dccphtx_tstamp); - t_elapsed = dp->dccps_options_received.dccpor_elapsed_time * 10; - - DCCP_BUG_ON(r_sample < 0); - if (unlikely(r_sample <= t_elapsed)) - DCCP_WARN("WARNING: r_sample=%dus <= t_elapsed=%dus\n", - (int)r_sample, (int)t_elapsed); - else - r_sample -= t_elapsed; - CCID3_RTT_SANITY_CHECK(r_sample); + r_sample = dccp_sample_rtt(sk, &now, &packet->dccphtx_tstamp); - /* Update RTT estimate by + /* + * Update RTT estimate by * If (No feedback recv) * R = R_sample; * Else @@ -467,27 +481,23 @@ static void ccid3_hc_tx_packet_recv(stru if (hctx->ccid3hctx_state == TFRC_SSTATE_NO_FBACK) { /* * Larger Initial Windows [RFC 4342, sec. 5] - * We deviate in that we use `s' instead of `MSS'. 
*/ - __u64 w_init = min(4 * hctx->ccid3hctx_s, - max(2 * hctx->ccid3hctx_s, 4380)); hctx->ccid3hctx_rtt = r_sample; - hctx->ccid3hctx_x = scaled_div(w_init << 6, r_sample); + hctx->ccid3hctx_x = rfc3390_initial_rate(sk); hctx->ccid3hctx_t_ld = now; - ccid3_update_send_time(hctx); + ccid3_update_send_interval(hctx); - ccid3_pr_debug("%s(%p), s=%u, w_init=%llu, " - "R_sample=%dus, X=%u\n", dccp_role(sk), + ccid3_pr_debug("%s(%p), s=%u, MSS=%u, " + "R_sample=%uus, X=%u\n", dccp_role(sk), sk, hctx->ccid3hctx_s, - (unsigned long long)w_init, - (int)r_sample, + dp->dccps_mss_cache, r_sample, (unsigned)(hctx->ccid3hctx_x >> 6)); ccid3_hc_tx_set_state(sk, TFRC_SSTATE_FBACK); } else { hctx->ccid3hctx_rtt = (9 * hctx->ccid3hctx_rtt + - (u32)r_sample) / 10; + r_sample) / 10; /* Update sending rate (step 4 of [RFC 3448, 4.3]) */ if (hctx->ccid3hctx_p > 0) @@ -497,10 +507,10 @@ static void ccid3_hc_tx_packet_recv(stru hctx->ccid3hctx_p); ccid3_hc_tx_update_x(sk, &now); - ccid3_pr_debug("%s(%p), RTT=%uus (sample=%dus), s=%u, " + ccid3_pr_debug("%s(%p), RTT=%uus (sample=%uus), s=%u, " "p=%u, X_calc=%u, X_recv=%u, X=%u\n", dccp_role(sk), - sk, hctx->ccid3hctx_rtt, (int)r_sample, + sk, hctx->ccid3hctx_rtt, r_sample, hctx->ccid3hctx_s, hctx->ccid3hctx_p, hctx->ccid3hctx_x_calc, (unsigned)(hctx->ccid3hctx_x_recv >> 6), @@ -644,10 +654,50 @@ static void ccid3_hc_tx_exit(struct sock dccp_tx_hist_purge(ccid3_tx_hist, &hctx->ccid3hctx_hist); } +static void ccid3_hc_tx_get_info(struct sock *sk, struct tcp_info *info) +{ + const struct ccid3_hc_tx_sock *hctx = ccid3_hc_tx_sk(sk); + + /* Listen socks doesn't have a private CCID block */ + if (sk->sk_state == DCCP_LISTEN) + return; + + BUG_ON(hctx == NULL); + + info->tcpi_rto = hctx->ccid3hctx_t_rto; + info->tcpi_rtt = hctx->ccid3hctx_rtt; +} + +static int ccid3_hc_tx_getsockopt(struct sock *sk, const int optname, int len, + u32 __user *optval, int __user *optlen) +{ + const struct ccid3_hc_tx_sock *hctx = ccid3_hc_tx_sk(sk); + const void *val; + + /* Listen socks doesn't have a private CCID block */ + if (sk->sk_state == DCCP_LISTEN) + return -EINVAL; + + switch (optname) { + case DCCP_SOCKOPT_CCID_TX_INFO: + if (len < sizeof(hctx->ccid3hctx_tfrc)) + return -EINVAL; + len = sizeof(hctx->ccid3hctx_tfrc); + val = &hctx->ccid3hctx_tfrc; + break; + default: + return -ENOPROTOOPT; + } + + if (put_user(len, optlen) || copy_to_user(optval, val, len)) + return -EFAULT; + + return 0; +} + /* - * RX Half Connection methods + * Receiver Half-Connection Routines */ - #ifdef CONFIG_IP_DCCP_CCID3_DEBUG static const char *ccid3_rx_state_name(enum ccid3_hc_rx_states state) { @@ -977,8 +1027,7 @@ static void ccid3_hc_rx_packet_recv(stru const struct dccp_options_received *opt_recv; struct dccp_rx_hist_entry *packet; struct timeval now; - u32 p_prev, rtt_prev; - suseconds_t r_sample, t_elapsed; + u32 p_prev, r_sample, rtt_prev; int loss, payload_size; BUG_ON(hcrx == NULL); @@ -994,17 +1043,7 @@ static void ccid3_hc_rx_packet_recv(stru break; rtt_prev = hcrx->ccid3hcrx_rtt; dccp_timestamp(sk, &now); - timeval_sub_usecs(&now, opt_recv->dccpor_timestamp_echo * 10); - r_sample = timeval_usecs(&now); - t_elapsed = opt_recv->dccpor_elapsed_time * 10; - - DCCP_BUG_ON(r_sample < 0); - if (unlikely(r_sample <= t_elapsed)) - DCCP_WARN("r_sample=%ldus, t_elapsed=%ldus\n", - (long)r_sample, (long)t_elapsed); - else - r_sample -= t_elapsed; - CCID3_RTT_SANITY_CHECK(r_sample); + r_sample = dccp_sample_rtt(sk, &now, NULL); if (hcrx->ccid3hcrx_state == TFRC_RSTATE_NO_DATA) hcrx->ccid3hcrx_rtt = 
r_sample; @@ -1132,20 +1171,6 @@ static void ccid3_hc_rx_get_info(struct info->tcpi_rcv_rtt = hcrx->ccid3hcrx_rtt; } -static void ccid3_hc_tx_get_info(struct sock *sk, struct tcp_info *info) -{ - const struct ccid3_hc_tx_sock *hctx = ccid3_hc_tx_sk(sk); - - /* Listen socks doesn't have a private CCID block */ - if (sk->sk_state == DCCP_LISTEN) - return; - - BUG_ON(hctx == NULL); - - info->tcpi_rto = hctx->ccid3hctx_t_rto; - info->tcpi_rtt = hctx->ccid3hctx_rtt; -} - static int ccid3_hc_rx_getsockopt(struct sock *sk, const int optname, int len, u32 __user *optval, int __user *optlen) { @@ -1173,33 +1198,6 @@ static int ccid3_hc_rx_getsockopt(struct return 0; } -static int ccid3_hc_tx_getsockopt(struct sock *sk, const int optname, int len, - u32 __user *optval, int __user *optlen) -{ - const struct ccid3_hc_tx_sock *hctx = ccid3_hc_tx_sk(sk); - const void *val; - - /* Listen socks doesn't have a private CCID block */ - if (sk->sk_state == DCCP_LISTEN) - return -EINVAL; - - switch (optname) { - case DCCP_SOCKOPT_CCID_TX_INFO: - if (len < sizeof(hctx->ccid3hctx_tfrc)) - return -EINVAL; - len = sizeof(hctx->ccid3hctx_tfrc); - val = &hctx->ccid3hctx_tfrc; - break; - default: - return -ENOPROTOOPT; - } - - if (put_user(len, optlen) || copy_to_user(optval, val, len)) - return -EFAULT; - - return 0; -} - static struct ccid_operations ccid3 = { .ccid_id = DCCPC_CCID3, .ccid_name = "ccid3", diff -puN net/dccp/ccids/ccid3.h~git-net net/dccp/ccids/ccid3.h --- a/net/dccp/ccids/ccid3.h~git-net +++ a/net/dccp/ccids/ccid3.h @@ -51,16 +51,6 @@ /* Parameter t_mbi from [RFC 3448, 4.3]: backoff interval in seconds */ #define TFRC_T_MBI 64 -/* What we think is a reasonable upper limit on RTT values */ -#define CCID3_SANE_RTT_MAX ((suseconds_t)(4 * USEC_PER_SEC)) - -#define CCID3_RTT_SANITY_CHECK(rtt) do { \ - if (rtt > CCID3_SANE_RTT_MAX) { \ - DCCP_CRIT("RTT (%d) too large, substituting %d", \ - (int)rtt, (int)CCID3_SANE_RTT_MAX); \ - rtt = CCID3_SANE_RTT_MAX; \ - } } while (0) - enum ccid3_options { TFRC_OPT_LOSS_EVENT_RATE = 192, TFRC_OPT_LOSS_INTERVALS = 193, diff -puN net/dccp/ccids/lib/loss_interval.c~git-net net/dccp/ccids/lib/loss_interval.c --- a/net/dccp/ccids/lib/loss_interval.c~git-net +++ a/net/dccp/ccids/lib/loss_interval.c @@ -91,7 +91,7 @@ u32 dccp_li_hist_calc_i_mean(struct list u32 w_tot = 0; list_for_each_entry_safe(li_entry, li_next, list, dccplih_node) { - if (li_entry->dccplih_interval != ~0) { + if (li_entry->dccplih_interval != ~0U) { i_tot0 += li_entry->dccplih_interval * dccp_li_hist_w[i]; w_tot += dccp_li_hist_w[i]; if (i != 0) diff -puN net/dccp/dccp.h~git-net net/dccp/dccp.h --- a/net/dccp/dccp.h~git-net +++ a/net/dccp/dccp.h @@ -31,13 +31,9 @@ __stringify(cond)); \ } while (0) -#ifdef MODULE #define DCCP_PRINTK(enable, fmt, args...) do { if (enable) \ printk(fmt, ##args); \ } while(0) -#else -#define DCCP_PRINTK(enable, fmt, args...) printk(fmt, ##args) -#endif #define DCCP_PR_DEBUG(enable, fmt, a...) DCCP_PRINTK(enable, KERN_DEBUG \ "%s: " fmt, __FUNCTION__, ##a) @@ -75,11 +71,15 @@ extern void dccp_time_wait(struct sock * /* RFC 1122, 4.2.3.1 initial RTO value */ #define DCCP_TIMEOUT_INIT ((unsigned)(3 * HZ)) +#define DCCP_RTO_MAX ((unsigned)(120 * HZ)) /* FIXME: using TCP value */ + +/* bounds for sampled RTT values from packet exchanges (in usec) */ +#define DCCP_SANE_RTT_MIN 100 +#define DCCP_SANE_RTT_MAX (4 * USEC_PER_SEC) + /* Maximal interval between probes for local resources. 
*/ #define DCCP_RESOURCE_PROBE_INTERVAL ((unsigned)(HZ / 2U)) -#define DCCP_RTO_MAX ((unsigned)(120 * HZ)) /* FIXME: using TCP value */ - /* sysctl variables for DCCP */ extern int sysctl_dccp_request_retries; extern int sysctl_dccp_retries1; @@ -92,17 +92,43 @@ extern int sysctl_dccp_feat_send_ack_ve extern int sysctl_dccp_feat_send_ndp_count; extern int sysctl_dccp_tx_qlen; +/* + * 48-bit sequence number arithmetic (signed and unsigned) + */ +#define INT48_MIN 0x800000000000LL /* 2^47 */ +#define UINT48_MAX 0xFFFFFFFFFFFFLL /* 2^48 - 1 */ +#define COMPLEMENT48(x) (0x1000000000000LL - (x)) /* 2^48 - x */ +#define TO_SIGNED48(x) (((x) < INT48_MIN)? (x) : -COMPLEMENT48( (x))) +#define TO_UNSIGNED48(x) (((x) >= 0)? (x) : COMPLEMENT48(-(x))) +#define ADD48(a, b) (((a) + (b)) & UINT48_MAX) +#define SUB48(a, b) ADD48((a), COMPLEMENT48(b)) + +static inline void dccp_set_seqno(u64 *seqno, u64 value) +{ + *seqno = value & UINT48_MAX; +} + +static inline void dccp_inc_seqno(u64 *seqno) +{ + *seqno = ADD48(*seqno, 1); +} + +/* signed mod-2^48 distance: pos. if seqno1 < seqno2, neg. if seqno1 > seqno2 */ +static inline s64 dccp_delta_seqno(const u64 seqno1, const u64 seqno2) +{ + u64 delta = SUB48(seqno2, seqno1); + + return TO_SIGNED48(delta); +} + /* is seq1 < seq2 ? */ static inline int before48(const u64 seq1, const u64 seq2) { - return (s64)((seq1 << 16) - (seq2 << 16)) < 0; + return (s64)((seq2 << 16) - (seq1 << 16)) > 0; } /* is seq1 > seq2 ? */ -static inline int after48(const u64 seq1, const u64 seq2) -{ - return (s64)((seq2 << 16) - (seq1 << 16)) < 0; -} +#define after48(seq1, seq2) before48(seq2, seq1) /* is seq2 <= seq1 <= seq3 ? */ static inline int between48(const u64 seq1, const u64 seq2, const u64 seq3) @@ -118,9 +144,7 @@ static inline u64 max48(const u64 seq1, /* is seq1 next seqno after seq2 */ static inline int follows48(const u64 seq1, const u64 seq2) { - int diff = (seq1 & 0xFFFF) - (seq2 & 0xFFFF); - - return diff==1; + return dccp_delta_seqno(seq2, seq1) == 1; } enum { @@ -272,6 +296,8 @@ extern int dccp_v4_connect(struct soc extern int dccp_send_reset(struct sock *sk, enum dccp_reset_codes code); extern void dccp_send_close(struct sock *sk, const int active); extern int dccp_invalid_packet(struct sk_buff *skb); +extern u32 dccp_sample_rtt(struct sock *sk, struct timeval *t_recv, + struct timeval *t_history); static inline int dccp_bad_service_code(const struct sock *sk, const __be32 service) @@ -313,26 +339,7 @@ static inline int dccp_packet_without_ac return type == DCCP_PKT_DATA || type == DCCP_PKT_REQUEST; } -#define DCCP_MAX_SEQNO ((((u64)1) << 48) - 1) -#define DCCP_PKT_WITHOUT_ACK_SEQ (DCCP_MAX_SEQNO << 2) - -static inline void dccp_set_seqno(u64 *seqno, u64 value) -{ - if (value > DCCP_MAX_SEQNO) - value -= DCCP_MAX_SEQNO + 1; - *seqno = value; -} - -static inline u64 dccp_delta_seqno(u64 seqno1, u64 seqno2) -{ - return ((seqno2 << 16) - (seqno1 << 16)) >> 16; -} - -static inline void dccp_inc_seqno(u64 *seqno) -{ - if (++*seqno > DCCP_MAX_SEQNO) - *seqno = 0; -} +#define DCCP_PKT_WITHOUT_ACK_SEQ (UINT48_MAX << 2) static inline void dccp_hdr_set_seq(struct dccp_hdr *dh, const u64 gss) { diff -puN net/dccp/input.c~git-net net/dccp/input.c --- a/net/dccp/input.c~git-net +++ a/net/dccp/input.c @@ -86,7 +86,8 @@ static int dccp_check_seqno(struct sock dh->dccph_type == DCCP_PKT_SYNCACK) { if (between48(DCCP_SKB_CB(skb)->dccpd_ack_seq, dp->dccps_awl, dp->dccps_awh) && - !before48(DCCP_SKB_CB(skb)->dccpd_seq, dp->dccps_swl)) + dccp_delta_seqno(dp->dccps_swl, + 
DCCP_SKB_CB(skb)->dccpd_seq) >= 0) dccp_update_gsr(sk, DCCP_SKB_CB(skb)->dccpd_seq); else return -1; @@ -203,7 +204,8 @@ static int __dccp_rcv_established(struct if (dp->dccps_role != DCCP_ROLE_CLIENT) goto send_sync; check_seq: - if (!before48(DCCP_SKB_CB(skb)->dccpd_seq, dp->dccps_osr)) { + if (dccp_delta_seqno(dp->dccps_osr, + DCCP_SKB_CB(skb)->dccpd_seq) >= 0) { send_sync: dccp_send_sync(sk, DCCP_SKB_CB(skb)->dccpd_seq, DCCP_PKT_SYNC); @@ -298,6 +300,14 @@ static int dccp_rcv_request_sent_state_p if (dccp_parse_options(sk, skb)) goto out_invalid_packet; + /* Obtain RTT sample from SYN exchange (used by CCID 3) */ + if (dp->dccps_options_received.dccpor_timestamp_echo) { + struct timeval now; + + dccp_timestamp(sk, &now); + dp->dccps_syn_rtt = dccp_sample_rtt(sk, &now, NULL); + } + if (dccp_msk(sk)->dccpms_send_ack_vector && dccp_ackvec_add(dp->dccps_hc_rx_ackvec, sk, DCCP_SKB_CB(skb)->dccpd_seq, @@ -575,3 +585,43 @@ discard: } EXPORT_SYMBOL_GPL(dccp_rcv_state_process); + +/** + * dccp_sample_rtt - Sample RTT from packet exchange + * + * @sk: connected dccp_sock + * @t_recv: receive timestamp of packet with timestamp echo + * @t_hist: packet history timestamp or NULL + */ +u32 dccp_sample_rtt(struct sock *sk, struct timeval *t_recv, + struct timeval *t_hist) +{ + struct dccp_sock *dp = dccp_sk(sk); + struct dccp_options_received *or = &dp->dccps_options_received; + suseconds_t delta; + + if (t_hist == NULL) { + if (!or->dccpor_timestamp_echo) { + DCCP_WARN("packet without timestamp echo\n"); + return DCCP_SANE_RTT_MAX; + } + timeval_sub_usecs(t_recv, or->dccpor_timestamp_echo * 10); + delta = timeval_usecs(t_recv); + } else + delta = timeval_delta(t_recv, t_hist); + + delta -= or->dccpor_elapsed_time * 10; /* either set or 0 */ + + if (unlikely(delta <= 0)) { + DCCP_WARN("unusable RTT sample %ld, using min\n", (long)delta); + return DCCP_SANE_RTT_MIN; + } + if (unlikely(delta - (suseconds_t)DCCP_SANE_RTT_MAX > 0)) { + DCCP_WARN("RTT sample %ld too large, using max\n", (long)delta); + return DCCP_SANE_RTT_MAX; + } + + return delta; +} + +EXPORT_SYMBOL_GPL(dccp_sample_rtt); diff -puN net/dccp/ipv4.c~git-net net/dccp/ipv4.c --- a/net/dccp/ipv4.c~git-net +++ a/net/dccp/ipv4.c @@ -207,8 +207,8 @@ static void dccp_v4_err(struct sk_buff * (iph->ihl << 2)); struct dccp_sock *dp; struct inet_sock *inet; - const int type = skb->h.icmph->type; - const int code = skb->h.icmph->code; + const int type = icmp_hdr(skb)->type; + const int code = icmp_hdr(skb)->code; struct sock *sk; __u64 seq; int err; @@ -363,8 +363,8 @@ EXPORT_SYMBOL_GPL(dccp_v4_send_check); static inline u64 dccp_v4_init_sequence(const struct sk_buff *skb) { - return secure_dccp_sequence_number(skb->nh.iph->daddr, - skb->nh.iph->saddr, + return secure_dccp_sequence_number(ip_hdr(skb)->daddr, + ip_hdr(skb)->saddr, dccp_hdr(skb)->dccph_dport, dccp_hdr(skb)->dccph_sport); } @@ -405,7 +405,7 @@ struct sock *dccp_v4_request_recv_sock(s newinet->opt = ireq->opt; ireq->opt = NULL; newinet->mc_index = inet_iif(skb); - newinet->mc_ttl = skb->nh.iph->ttl; + newinet->mc_ttl = ip_hdr(skb)->ttl; newinet->id = jiffies; dccp_sync_mss(newsk, dst_mtu(dst)); @@ -428,7 +428,7 @@ EXPORT_SYMBOL_GPL(dccp_v4_request_recv_s static struct sock *dccp_v4_hnd_req(struct sock *sk, struct sk_buff *skb) { const struct dccp_hdr *dh = dccp_hdr(skb); - const struct iphdr *iph = skb->nh.iph; + const struct iphdr *iph = ip_hdr(skb); struct sock *nsk; struct request_sock **prev; /* Find possible connection requests. 
*/ @@ -460,8 +460,8 @@ static struct dst_entry* dccp_v4_route_s struct rtable *rt; struct flowi fl = { .oif = ((struct rtable *)skb->dst)->rt_iif, .nl_u = { .ip4_u = - { .daddr = skb->nh.iph->saddr, - .saddr = skb->nh.iph->daddr, + { .daddr = ip_hdr(skb)->saddr, + .saddr = ip_hdr(skb)->daddr, .tos = RT_CONN_FLAGS(sk) } }, .proto = sk->sk_protocol, .uli_u = { .ports = @@ -513,6 +513,7 @@ static void dccp_v4_ctl_send_reset(struc { int err; struct dccp_hdr *rxdh = dccp_hdr(rxskb), *dh; + const struct iphdr *rxiph; const int dccp_hdr_reset_len = sizeof(struct dccp_hdr) + sizeof(struct dccp_hdr_ext) + sizeof(struct dccp_hdr_reset); @@ -559,13 +560,13 @@ static void dccp_v4_ctl_send_reset(struc dccp_hdr_set_ack(dccp_hdr_ack_bits(skb), DCCP_SKB_CB(rxskb)->dccpd_seq); dccp_csum_outgoing(skb); - dh->dccph_checksum = dccp_v4_csum_finish(skb, rxskb->nh.iph->saddr, - rxskb->nh.iph->daddr); + rxiph = ip_hdr(rxskb); + dh->dccph_checksum = dccp_v4_csum_finish(skb, rxiph->saddr, + rxiph->daddr); bh_lock_sock(dccp_v4_ctl_socket->sk); err = ip_build_and_send_pkt(skb, dccp_v4_ctl_socket->sk, - rxskb->nh.iph->daddr, - rxskb->nh.iph->saddr, NULL); + rxiph->daddr, rxiph->saddr, NULL); bh_unlock_sock(dccp_v4_ctl_socket->sk); if (net_xmit_eval(err) == 0) { @@ -640,8 +641,8 @@ int dccp_v4_conn_request(struct sock *sk goto drop_and_free; ireq = inet_rsk(req); - ireq->loc_addr = skb->nh.iph->daddr; - ireq->rmt_addr = skb->nh.iph->saddr; + ireq->loc_addr = ip_hdr(skb)->daddr; + ireq->rmt_addr = ip_hdr(skb)->saddr; ireq->opt = NULL; /* @@ -809,6 +810,7 @@ EXPORT_SYMBOL_GPL(dccp_invalid_packet); static int dccp_v4_rcv(struct sk_buff *skb) { const struct dccp_hdr *dh; + const struct iphdr *iph; struct sock *sk; int min_cov; @@ -817,8 +819,9 @@ static int dccp_v4_rcv(struct sk_buff *s if (dccp_invalid_packet(skb)) goto discard_it; + iph = ip_hdr(skb); /* Step 1: If header checksum is incorrect, drop packet and return */ - if (dccp_v4_csum_finish(skb, skb->nh.iph->saddr, skb->nh.iph->daddr)) { + if (dccp_v4_csum_finish(skb, iph->saddr, iph->daddr)) { DCCP_WARN("dropped packet with invalid checksum\n"); goto discard_it; } @@ -832,8 +835,8 @@ static int dccp_v4_rcv(struct sk_buff *s "src=%u.%u.%u.%u@%-5d " "dst=%u.%u.%u.%u@%-5d seq=%llu", dccp_packet_name(dh->dccph_type), - NIPQUAD(skb->nh.iph->saddr), ntohs(dh->dccph_sport), - NIPQUAD(skb->nh.iph->daddr), ntohs(dh->dccph_dport), + NIPQUAD(iph->saddr), ntohs(dh->dccph_sport), + NIPQUAD(iph->daddr), ntohs(dh->dccph_dport), (unsigned long long) DCCP_SKB_CB(skb)->dccpd_seq); if (dccp_packet_without_ack(skb)) { @@ -848,10 +851,8 @@ static int dccp_v4_rcv(struct sk_buff *s /* Step 2: * Look up flow ID in table and get corresponding socket */ sk = __inet_lookup(&dccp_hashinfo, - skb->nh.iph->saddr, dh->dccph_sport, - skb->nh.iph->daddr, dh->dccph_dport, - inet_iif(skb)); - + iph->saddr, dh->dccph_sport, + iph->daddr, dh->dccph_dport, inet_iif(skb)); /* * Step 2: * If no socket ... 
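The ccid3.c hunks above drop ccid3_update_send_time() in favour of rfc3390_initial_rate() and ccid3_update_send_interval(), with the sending rate X kept scaled by 2^6 (i.e. in units of 64 * bytes/second). The standalone sketch below is not part of the patch; it only illustrates that arithmetic, assuming scaled_div(a, b) behaves like the tfrc library helper (roughly a * 10^6 / b, with the RTT in microseconds) and using invented MSS/RTT values.

/* Userspace sketch of the TFRC rate arithmetic used by the ccid3 hunks. */
#include <stdio.h>
#include <stdint.h>

static uint64_t scaled_div(uint64_t a, uint32_t b)
{
	return a * 1000000 / b;		/* assumption: mirrors tfrc's scaled_div() */
}

static uint32_t u32_min(uint32_t a, uint32_t b) { return a < b ? a : b; }
static uint32_t u32_max(uint32_t a, uint32_t b) { return a > b ? a : b; }

/* RFC 3390: w_init = min(4 * MSS, max(2 * MSS, 4380)); X_init = w_init / RTT */
static uint64_t rfc3390_initial_rate(uint32_t mss, uint32_t rtt_us)
{
	uint32_t w_init = u32_min(4 * mss, u32_max(2 * mss, 4380));

	return scaled_div((uint64_t)w_init << 6, rtt_us);	/* X scaled by 2^6 */
}

int main(void)
{
	uint32_t mss = 1460, rtt_us = 100000, s = 1460;	/* invented example values */
	uint64_t x = rfc3390_initial_rate(mss, rtt_us);

	/* t_ipi = s / X; with X scaled by 2^6 this is ((s << 6) * 10^6) / X */
	uint64_t t_ipi = scaled_div((uint64_t)s << 6, (uint32_t)x);

	printf("X = %llu bytes/s, t_ipi = %llu us\n",
	       (unsigned long long)(x >> 6), (unsigned long long)t_ipi);
	return 0;
}

For a 1460-byte MSS and a 100 ms RTT this yields X of roughly 43800 bytes/s and an inter-packet interval of about 33 ms, which is the behaviour the new initialisation path in ccid3_hc_tx_send_packet() aims for.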
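The net/dccp/dccp.h hunk above replaces the old DCCP_MAX_SEQNO helpers with explicit mod-2^48 arithmetic (ADD48/SUB48, dccp_delta_seqno) and rewrites before48()/after48() around a single comparison. The userspace sketch below copies the macro bodies from the patch; only the example sequence numbers and the printf scaffolding are invented, and it is meant purely to show the wrap-around behaviour.

/* Userspace sketch of the mod-2^48 sequence-number helpers from dccp.h. */
#include <stdio.h>
#include <stdint.h>

typedef uint64_t u64;
typedef int64_t  s64;

#define INT48_MIN	0x800000000000LL		/* 2^47 */
#define UINT48_MAX	0xFFFFFFFFFFFFLL		/* 2^48 - 1 */
#define COMPLEMENT48(x)	(0x1000000000000LL - (x))	/* 2^48 - x */
#define TO_SIGNED48(x)	(((x) < INT48_MIN) ? (x) : -COMPLEMENT48((x)))
#define ADD48(a, b)	(((a) + (b)) & UINT48_MAX)
#define SUB48(a, b)	ADD48((a), COMPLEMENT48(b))

/* signed mod-2^48 distance: positive iff seqno1 < seqno2 */
static s64 dccp_delta_seqno(u64 seqno1, u64 seqno2)
{
	return TO_SIGNED48(SUB48(seqno2, seqno1));
}

/* is seq1 < seq2, modulo 2^48? (rewritten form from the patch) */
static int before48(u64 seq1, u64 seq2)
{
	return (s64)((seq2 << 16) - (seq1 << 16)) > 0;
}

int main(void)
{
	u64 gss = UINT48_MAX;		/* highest 48-bit seqno, about to wrap */
	u64 next = ADD48(gss, 1);	/* wraps around to 0 */

	printf("next = %llu\n", (unsigned long long)next);		    /* 0 */
	printf("delta = %lld\n", (long long)dccp_delta_seqno(gss, next));  /* 1 */
	printf("before48 = %d\n", before48(gss, next));			    /* 1 */
	return 0;
}

One visible effect of the flipped comparison is at the half-window point, when two sequence numbers are exactly 2^47 apart: the old form reported each as before the other, the new form reports neither, and after48() now simply reuses before48() with swapped arguments.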
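The ipv4.c hunk just above, like most of the DCCP and DECnet hunks that follow, is a mechanical conversion from direct skb->nh / skb->h / skb->tail accesses to the accessor helpers (ip_hdr(), skb_reset_network_header(), skb_tail_pointer(), nlmsg_trim(), skb_copy_from_linear_data*()). The kernel-style fragment below only illustrates that pattern under the 2.6.22-era API; it is not taken from any file in this patch and will not build outside a kernel tree.

/* Illustrative kernel-style fragment of the sk_buff accessor conversions. */
#include <linux/skbuff.h>
#include <linux/string.h>
#include <linux/errno.h>
#include <linux/ip.h>
#include <net/netlink.h>

/* Receive path: reach headers through helpers instead of skb->nh / skb->h. */
static void sketch_rcv(struct sk_buff *skb)
{
	const struct iphdr *iph;

	skb_reset_network_header(skb);		/* was: skb->nh.raw = skb->data */
	iph = ip_hdr(skb);			/* was: skb->nh.iph */
	skb_pull(skb, iph->ihl * 4);
	skb_reset_transport_header(skb);	/* was: skb->h.raw = skb->data */
}

/* Message build path: remember the tail, roll back with nlmsg_trim(). */
static int sketch_fill(struct sk_buff *skb, const void *payload, int len)
{
	unsigned char *start = skb_tail_pointer(skb);	/* was: skb->tail */

	if (skb_tailroom(skb) < len) {
		nlmsg_trim(skb, start);		/* was: skb_trim(skb, start - skb->data) */
		return -EMSGSIZE;
	}
	memcpy(skb_put(skb, len), payload, len);
	return skb->len;
}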
diff -puN net/dccp/ipv6.c~git-net net/dccp/ipv6.c --- a/net/dccp/ipv6.c~git-net +++ a/net/dccp/ipv6.c @@ -84,8 +84,8 @@ static inline __u32 secure_dccpv6_sequen static inline __u32 dccp_v6_init_sequence(struct sk_buff *skb) { - return secure_dccpv6_sequence_number(skb->nh.ipv6h->daddr.s6_addr32, - skb->nh.ipv6h->saddr.s6_addr32, + return secure_dccpv6_sequence_number(ipv6_hdr(skb)->daddr.s6_addr32, + ipv6_hdr(skb)->saddr.s6_addr32, dccp_hdr(skb)->dccph_dport, dccp_hdr(skb)->dccph_sport ); @@ -261,8 +261,8 @@ static int dccp_v6_send_response(struct if (rxopt->srcrt) opt = ipv6_invert_rthdr(sk, - (struct ipv6_rt_hdr *)(pktopts->nh.raw + - rxopt->srcrt)); + (struct ipv6_rt_hdr *)(skb_network_header(pktopts) + + rxopt->srcrt)); } if (opt != NULL && opt->srcrt != NULL) { @@ -313,6 +313,7 @@ static void dccp_v6_reqsk_destructor(str static void dccp_v6_ctl_send_reset(struct sock *sk, struct sk_buff *rxskb) { struct dccp_hdr *rxdh = dccp_hdr(rxskb), *dh; + struct ipv6hdr *rxip6h; const u32 dccp_hdr_reset_len = sizeof(struct dccp_hdr) + sizeof(struct dccp_hdr_ext) + sizeof(struct dccp_hdr_reset); @@ -352,12 +353,13 @@ static void dccp_v6_ctl_send_reset(struc dccp_hdr_set_ack(dccp_hdr_ack_bits(skb), DCCP_SKB_CB(rxskb)->dccpd_seq); dccp_csum_outgoing(skb); - dh->dccph_checksum = dccp_v6_csum_finish(skb, &rxskb->nh.ipv6h->saddr, - &rxskb->nh.ipv6h->daddr); + rxip6h = ipv6_hdr(rxskb); + dh->dccph_checksum = dccp_v6_csum_finish(skb, &rxip6h->saddr, + &rxip6h->daddr); memset(&fl, 0, sizeof(fl)); - ipv6_addr_copy(&fl.fl6_dst, &rxskb->nh.ipv6h->saddr); - ipv6_addr_copy(&fl.fl6_src, &rxskb->nh.ipv6h->daddr); + ipv6_addr_copy(&fl.fl6_dst, &rxip6h->saddr); + ipv6_addr_copy(&fl.fl6_src, &rxip6h->daddr); fl.proto = IPPROTO_DCCP; fl.oif = inet6_iif(rxskb); @@ -390,7 +392,7 @@ static struct request_sock_ops dccp6_req static struct sock *dccp_v6_hnd_req(struct sock *sk,struct sk_buff *skb) { const struct dccp_hdr *dh = dccp_hdr(skb); - const struct ipv6hdr *iph = skb->nh.ipv6h; + const struct ipv6hdr *iph = ipv6_hdr(skb); struct sock *nsk; struct request_sock **prev; /* Find possible connection requests. */ @@ -460,8 +462,8 @@ static int dccp_v6_conn_request(struct s goto drop_and_free; ireq6 = inet6_rsk(req); - ipv6_addr_copy(&ireq6->rmt_addr, &skb->nh.ipv6h->saddr); - ipv6_addr_copy(&ireq6->loc_addr, &skb->nh.ipv6h->daddr); + ipv6_addr_copy(&ireq6->rmt_addr, &ipv6_hdr(skb)->saddr); + ipv6_addr_copy(&ireq6->loc_addr, &ipv6_hdr(skb)->daddr); ireq6->pktopts = NULL; if (ipv6_opt_accepted(sk, skb) || @@ -546,7 +548,7 @@ static struct sock *dccp_v6_request_recv newnp->pktoptions = NULL; newnp->opt = NULL; newnp->mcast_oif = inet6_iif(skb); - newnp->mcast_hops = skb->nh.ipv6h->hop_limit; + newnp->mcast_hops = ipv6_hdr(skb)->hop_limit; /* * No need to charge this sock to the relevant IPv6 refcnt debug socks count @@ -573,8 +575,8 @@ static struct sock *dccp_v6_request_recv if (rxopt->srcrt) opt = ipv6_invert_rthdr(sk, - (struct ipv6_rt_hdr *)(ireq6->pktopts->nh.raw + - rxopt->srcrt)); + (struct ipv6_rt_hdr *)(skb_network_header(ireq6->pktopts) + + rxopt->srcrt)); } if (dst == NULL) { @@ -653,7 +655,7 @@ static struct sock *dccp_v6_request_recv } newnp->opt = NULL; newnp->mcast_oif = inet6_iif(skb); - newnp->mcast_hops = skb->nh.ipv6h->hop_limit; + newnp->mcast_hops = ipv6_hdr(skb)->hop_limit; /* * Clone native IPv6 options from listening socket (if any) @@ -826,8 +828,8 @@ static int dccp_v6_rcv(struct sk_buff ** goto discard_it; /* Step 1: If header checksum is incorrect, drop packet and return. 
*/ - if (dccp_v6_csum_finish(skb, &skb->nh.ipv6h->saddr, - &skb->nh.ipv6h->daddr)) { + if (dccp_v6_csum_finish(skb, &ipv6_hdr(skb)->saddr, + &ipv6_hdr(skb)->daddr)) { DCCP_WARN("dropped packet with invalid checksum\n"); goto discard_it; } @@ -844,9 +846,9 @@ static int dccp_v6_rcv(struct sk_buff ** /* Step 2: * Look up flow ID in table and get corresponding socket */ - sk = __inet6_lookup(&dccp_hashinfo, &skb->nh.ipv6h->saddr, + sk = __inet6_lookup(&dccp_hashinfo, &ipv6_hdr(skb)->saddr, dh->dccph_sport, - &skb->nh.ipv6h->daddr, ntohs(dh->dccph_dport), + &ipv6_hdr(skb)->daddr, ntohs(dh->dccph_dport), inet6_iif(skb)); /* * Step 2: diff -puN net/dccp/options.c~git-net net/dccp/options.c --- a/net/dccp/options.c~git-net +++ a/net/dccp/options.c @@ -29,8 +29,6 @@ int sysctl_dccp_feat_ack_ratio = D int sysctl_dccp_feat_send_ack_vector = DCCPF_INITIAL_SEND_ACK_VECTOR; int sysctl_dccp_feat_send_ndp_count = DCCPF_INITIAL_SEND_NDP_COUNT; -EXPORT_SYMBOL_GPL(sysctl_dccp_feat_sequence_window); - void dccp_minisock_init(struct dccp_minisock *dmsk) { dmsk->dccpms_sequence_window = sysctl_dccp_feat_sequence_window; @@ -174,21 +172,25 @@ int dccp_parse_options(struct sock *sk, opt_recv->dccpor_timestamp_echo = ntohl(*(__be32 *)value); dccp_pr_debug("%s rx opt: TIMESTAMP_ECHO=%u, len=%d, " - "ackno=%llu, ", dccp_role(sk), + "ackno=%llu", dccp_role(sk), opt_recv->dccpor_timestamp_echo, len + 2, (unsigned long long) DCCP_SKB_CB(skb)->dccpd_ack_seq); - if (len == 4) + if (len == 4) { + dccp_pr_debug_cat("\n"); break; + } if (len == 6) elapsed_time = ntohs(*(__be16 *)(value + 4)); else elapsed_time = ntohl(*(__be32 *)(value + 4)); + dccp_pr_debug_cat(", ELAPSED_TIME=%d\n", elapsed_time); + /* Give precedence to the biggest ELAPSED_TIME */ if (elapsed_time > opt_recv->dccpor_elapsed_time) opt_recv->dccpor_elapsed_time = elapsed_time; @@ -565,6 +567,14 @@ int dccp_insert_options(struct sock *sk, dccp_insert_options_feat(sk, skb)) return -1; + /* + * Obtain RTT sample from Request/Response exchange. + * This is currently used in CCID 3 initialisation. 
+ */ + if (DCCP_SKB_CB(skb)->dccpd_type == DCCP_PKT_REQUEST && + dccp_insert_option_timestamp(sk, skb)) + return -1; + /* XXX: insert other options when appropriate */ if (DCCP_SKB_CB(skb)->dccpd_opt_len != 0) { diff -puN net/dccp/output.c~git-net net/dccp/output.c --- a/net/dccp/output.c~git-net +++ a/net/dccp/output.c @@ -194,6 +194,7 @@ static int dccp_wait_for_ccid(struct soc rc = ccid_hc_tx_send_packet(dp->dccps_hc_tx_ccid, sk, skb); if (rc <= 0) break; + dccp_pr_debug("delayed send by %d msec\n", rc); delay = msecs_to_jiffies(rc); sk->sk_write_pending++; release_sock(sk); @@ -255,7 +256,7 @@ void dccp_write_xmit(struct sock *sk, in DCCP_BUG("err=%d after ccid_hc_tx_packet_sent", err); } else { - dccp_pr_debug("packet discarded\n"); + dccp_pr_debug("packet discarded due to err=%d\n", err); kfree_skb(skb); } } diff -puN net/dccp/probe.c~git-net net/dccp/probe.c --- a/net/dccp/probe.c~git-net +++ a/net/dccp/probe.c @@ -90,15 +90,18 @@ static int jdccp_sendmsg(struct kiocb *i if (port == 0 || ntohs(inet->dport) == port || ntohs(inet->sport) == port) { if (hctx) - printl("%d.%d.%d.%d:%u %d.%d.%d.%d:%u %d %d %d %d %d\n", - NIPQUAD(inet->saddr), ntohs(inet->sport), - NIPQUAD(inet->daddr), ntohs(inet->dport), size, - hctx->ccid3hctx_s, hctx->ccid3hctx_rtt, - hctx->ccid3hctx_p, hctx->ccid3hctx_t_ipi); + printl("%d.%d.%d.%d:%u %d.%d.%d.%d:%u %d %d %d %d %u " + "%llu %llu %d\n", + NIPQUAD(inet->saddr), ntohs(inet->sport), + NIPQUAD(inet->daddr), ntohs(inet->dport), size, + hctx->ccid3hctx_s, hctx->ccid3hctx_rtt, + hctx->ccid3hctx_p, hctx->ccid3hctx_x_calc, + hctx->ccid3hctx_x_recv >> 6, + hctx->ccid3hctx_x >> 6, hctx->ccid3hctx_t_ipi); else printl("%d.%d.%d.%d:%u %d.%d.%d.%d:%u %d\n", - NIPQUAD(inet->saddr), ntohs(inet->sport), - NIPQUAD(inet->daddr), ntohs(inet->dport), size); + NIPQUAD(inet->saddr), ntohs(inet->sport), + NIPQUAD(inet->daddr), ntohs(inet->dport), size); } jprobe_return(); diff -puN net/decnet/af_decnet.c~git-net net/decnet/af_decnet.c --- a/net/decnet/af_decnet.c~git-net +++ a/net/decnet/af_decnet.c @@ -2413,6 +2413,7 @@ module_init(decnet_init); static void __exit decnet_exit(void) { sock_unregister(AF_DECnet); + rtnl_unregister_all(PF_DECnet); dev_remove_pack(&dn_dix_packet_type); dn_unregister_sysctl(); diff -puN net/decnet/dn_dev.c~git-net net/decnet/dn_dev.c --- a/net/decnet/dn_dev.c~git-net +++ a/net/decnet/dn_dev.c @@ -799,7 +799,6 @@ static int dn_nl_dump_ifaddr(struct sk_b skip_ndevs = cb->args[0]; skip_naddr = cb->args[1]; - read_lock(&dev_base_lock); for (dev = dev_base, idx = 0; dev; dev = dev->next, idx++) { if (idx < skip_ndevs) continue; @@ -824,8 +823,6 @@ static int dn_nl_dump_ifaddr(struct sk_b } } done: - read_unlock(&dev_base_lock); - cb->args[0] = idx; cb->args[1] = dn_idx; @@ -913,7 +910,7 @@ static void dn_send_endnode_hello(struct pktlen = (__le16 *)skb_push(skb,2); *pktlen = dn_htons(skb->len - 2); - skb->nh.raw = skb->data; + skb_reset_network_header(skb); dn_rt_finish_output(skb, dn_rt_all_rt_mcast, msg->id); } @@ -1005,7 +1002,7 @@ static void dn_send_router_hello(struct pktlen = (__le16 *)skb_push(skb, 2); *pktlen = dn_htons(skb->len - 2); - skb->nh.raw = skb->data; + skb_reset_network_header(skb); if (dn_am_i_a_router(dn, dn_db, ifa)) { struct sk_buff *skb2 = skb_copy(skb, GFP_ATOMIC); @@ -1447,24 +1444,6 @@ static const struct file_operations dn_d #endif /* CONFIG_PROC_FS */ -static struct rtnetlink_link dnet_rtnetlink_table[RTM_NR_MSGTYPES] = -{ - [RTM_NEWADDR - RTM_BASE] = { .doit = dn_nl_newaddr, }, - [RTM_DELADDR - RTM_BASE] = { .doit = 
dn_nl_deladdr, }, - [RTM_GETADDR - RTM_BASE] = { .dumpit = dn_nl_dump_ifaddr, }, -#ifdef CONFIG_DECNET_ROUTER - [RTM_NEWROUTE - RTM_BASE] = { .doit = dn_fib_rtm_newroute, }, - [RTM_DELROUTE - RTM_BASE] = { .doit = dn_fib_rtm_delroute, }, - [RTM_GETROUTE - RTM_BASE] = { .doit = dn_cache_getroute, - .dumpit = dn_fib_dump, }, - [RTM_GETRULE - RTM_BASE] = { .dumpit = dn_fib_dump_rules, }, -#else - [RTM_GETROUTE - RTM_BASE] = { .doit = dn_cache_getroute, - .dumpit = dn_cache_dump, }, -#endif - -}; - static int __initdata addr[2]; module_param_array(addr, int, NULL, 0444); MODULE_PARM_DESC(addr, "The DECnet address of this machine: area,node"); @@ -1485,7 +1464,9 @@ void __init dn_dev_init(void) dn_dev_devices_on(); - rtnetlink_links[PF_DECnet] = dnet_rtnetlink_table; + rtnl_register(PF_DECnet, RTM_NEWADDR, dn_nl_newaddr, NULL); + rtnl_register(PF_DECnet, RTM_DELADDR, dn_nl_deladdr, NULL); + rtnl_register(PF_DECnet, RTM_GETADDR, NULL, dn_nl_dump_ifaddr); proc_net_fops_create("decnet_dev", S_IRUGO, &dn_dev_seq_fops); @@ -1500,8 +1481,6 @@ void __init dn_dev_init(void) void __exit dn_dev_cleanup(void) { - rtnetlink_links[PF_DECnet] = NULL; - #ifdef CONFIG_SYSCTL { int i; diff -puN net/decnet/dn_fib.c~git-net net/decnet/dn_fib.c --- a/net/decnet/dn_fib.c~git-net +++ a/net/decnet/dn_fib.c @@ -504,7 +504,7 @@ static int dn_fib_check_attr(struct rtms return 0; } -int dn_fib_rtm_delroute(struct sk_buff *skb, struct nlmsghdr *nlh, void *arg) +static int dn_fib_rtm_delroute(struct sk_buff *skb, struct nlmsghdr *nlh, void *arg) { struct dn_fib_table *tb; struct rtattr **rta = arg; @@ -520,7 +520,7 @@ int dn_fib_rtm_delroute(struct sk_buff * return -ESRCH; } -int dn_fib_rtm_newroute(struct sk_buff *skb, struct nlmsghdr *nlh, void *arg) +static int dn_fib_rtm_newroute(struct sk_buff *skb, struct nlmsghdr *nlh, void *arg) { struct dn_fib_table *tb; struct rtattr **rta = arg; @@ -748,11 +748,13 @@ void __exit dn_fib_cleanup(void) void __init dn_fib_init(void) { - dn_fib_table_init(); dn_fib_rules_init(); register_dnaddr_notifier(&dn_fib_dnaddr_notifier); + + rtnl_register(PF_DECnet, RTM_NEWROUTE, dn_fib_rtm_newroute, NULL); + rtnl_register(PF_DECnet, RTM_DELROUTE, dn_fib_rtm_delroute, NULL); } diff -puN net/decnet/dn_neigh.c~git-net net/decnet/dn_neigh.c --- a/net/decnet/dn_neigh.c~git-net +++ a/net/decnet/dn_neigh.c @@ -261,7 +261,7 @@ static int dn_long_output(struct sk_buff lp->s_class = 0; lp->pt = 0; - skb->nh.raw = skb->data; + skb_reset_network_header(skb); return NF_HOOK(PF_DECnet, NF_DN_POST_ROUTING, skb, NULL, neigh->dev, dn_neigh_output_packet); } @@ -300,7 +300,7 @@ static int dn_short_output(struct sk_buf sp->srcnode = cb->src; sp->forward = cb->hops & 0x3f; - skb->nh.raw = skb->data; + skb_reset_network_header(skb); return NF_HOOK(PF_DECnet, NF_DN_POST_ROUTING, skb, NULL, neigh->dev, dn_neigh_output_packet); } @@ -342,7 +342,7 @@ static int dn_phase3_output(struct sk_bu sp->srcnode = cb->src & dn_htons(0x03ff); sp->forward = cb->hops & 0x3f; - skb->nh.raw = skb->data; + skb_reset_network_header(skb); return NF_HOOK(PF_DECnet, NF_DN_POST_ROUTING, skb, NULL, neigh->dev, dn_neigh_output_packet); } diff -puN net/decnet/dn_nsp_in.c~git-net net/decnet/dn_nsp_in.c --- a/net/decnet/dn_nsp_in.c~git-net +++ a/net/decnet/dn_nsp_in.c @@ -362,7 +362,8 @@ static void dn_nsp_conn_conf(struct sock u16 dlen = *skb->data; if ((dlen <= 16) && (dlen <= skb->len)) { scp->conndata_in.opt_optl = dn_htons(dlen); - memcpy(scp->conndata_in.opt_data, skb->data + 1, dlen); + skb_copy_from_linear_data_offset(skb, 1, + 
scp->conndata_in.opt_data, dlen); } } dn_nsp_send_link(sk, DN_NOCHANGE, 0); @@ -406,7 +407,7 @@ static void dn_nsp_disc_init(struct sock u16 dlen = *skb->data; if ((dlen <= 16) && (dlen <= skb->len)) { scp->discdata_in.opt_optl = dn_htons(dlen); - memcpy(scp->discdata_in.opt_data, skb->data + 1, dlen); + skb_copy_from_linear_data_offset(skb, 1, scp->discdata_in.opt_data, dlen); } } @@ -725,7 +726,7 @@ static int dn_nsp_rx_packet(struct sk_bu if (!pskb_may_pull(skb, 2)) goto free_out; - skb->h.raw = skb->data; + skb_reset_transport_header(skb); cb->nsp_flags = *ptr++; if (decnet_debug_level & 2) diff -puN net/decnet/dn_nsp_out.c~git-net net/decnet/dn_nsp_out.c --- a/net/decnet/dn_nsp_out.c~git-net +++ a/net/decnet/dn_nsp_out.c @@ -79,7 +79,7 @@ static void dn_nsp_send(struct sk_buff * struct dst_entry *dst; struct flowi fl; - skb->h.raw = skb->data; + skb_reset_transport_header(skb); scp->stamp = jiffies; dst = sk_dst_check(sk, 0); @@ -681,8 +681,10 @@ void dn_nsp_send_conninit(struct sock *s if (scp->peer.sdn_objnum) type = 0; - skb_put(skb, dn_sockaddr2username(&scp->peer, skb->tail, type)); - skb_put(skb, dn_sockaddr2username(&scp->addr, skb->tail, 2)); + skb_put(skb, dn_sockaddr2username(&scp->peer, + skb_tail_pointer(skb), type)); + skb_put(skb, dn_sockaddr2username(&scp->addr, + skb_tail_pointer(skb), 2)); menuver = DN_MENUVER_ACC | DN_MENUVER_USR; if (scp->peer.sdn_flags & SDF_PROXY) diff -puN net/decnet/dn_route.c~git-net net/decnet/dn_route.c --- a/net/decnet/dn_route.c~git-net +++ a/net/decnet/dn_route.c @@ -77,6 +77,7 @@ #include #include #include +#include #include #include #include @@ -386,7 +387,7 @@ static int dn_return_short(struct sk_buf __le16 tmp; /* Add back headers */ - skb_push(skb, skb->data - skb->nh.raw); + skb_push(skb, skb->data - skb_network_header(skb)); if ((skb = skb_unshare(skb, GFP_ATOMIC)) == NULL) return NET_RX_DROP; @@ -425,7 +426,7 @@ static int dn_return_long(struct sk_buff unsigned char tmp[ETH_ALEN]; /* Add back all headers */ - skb_push(skb, skb->data - skb->nh.raw); + skb_push(skb, skb->data - skb_network_header(skb)); if ((skb = skb_unshare(skb, GFP_ATOMIC)) == NULL) return NET_RX_DROP; @@ -504,7 +505,7 @@ static int dn_route_rx_long(struct sk_bu goto drop_it; skb_pull(skb, 20); - skb->h.raw = skb->data; + skb_reset_transport_header(skb); /* Destination info */ ptr += 2; @@ -542,7 +543,7 @@ static int dn_route_rx_short(struct sk_b goto drop_it; skb_pull(skb, 5); - skb->h.raw = skb->data; + skb_reset_transport_header(skb); cb->dst = *(__le16 *)ptr; ptr += 2; @@ -615,7 +616,7 @@ int dn_route_rcv(struct sk_buff *skb, st flags = *skb->data; } - skb->nh.raw = skb->data; + skb_reset_network_header(skb); /* * Weed out future version DECnet @@ -1468,7 +1469,7 @@ static int dn_rt_fill_info(struct sk_buf struct dn_route *rt = (struct dn_route *)skb->dst; struct rtmsg *r; struct nlmsghdr *nlh; - unsigned char *b = skb->tail; + unsigned char *b = skb_tail_pointer(skb); long expires; nlh = NLMSG_NEW(skb, pid, seq, event, sizeof(*r), flags); @@ -1509,19 +1510,19 @@ static int dn_rt_fill_info(struct sk_buf if (rt->fl.iif) RTA_PUT(skb, RTA_IIF, sizeof(int), &rt->fl.iif); - nlh->nlmsg_len = skb->tail - b; + nlh->nlmsg_len = skb_tail_pointer(skb) - b; return skb->len; nlmsg_failure: rtattr_failure: - skb_trim(skb, b - skb->data); + nlmsg_trim(skb, b); return -1; } /* * This is called by both endnodes and routers now. 
*/ -int dn_cache_getroute(struct sk_buff *in_skb, struct nlmsghdr *nlh, void *arg) +static int dn_cache_getroute(struct sk_buff *in_skb, struct nlmsghdr *nlh, void *arg) { struct rtattr **rta = arg; struct rtmsg *rtm = NLMSG_DATA(nlh); @@ -1537,7 +1538,7 @@ int dn_cache_getroute(struct sk_buff *in skb = alloc_skb(NLMSG_GOODSIZE, GFP_KERNEL); if (skb == NULL) return -ENOBUFS; - skb->mac.raw = skb->data; + skb_reset_mac_header(skb); cb = DN_SKB_CB(skb); if (rta[RTA_SRC-1]) @@ -1812,6 +1813,13 @@ void __init dn_route_init(void) dn_dst_ops.gc_thresh = (dn_rt_hash_mask + 1); proc_net_fops_create("decnet_cache", S_IRUGO, &dn_rt_cache_seq_fops); + +#ifdef CONFIG_DECNET_ROUTER + rtnl_register(PF_DECnet, RTM_GETROUTE, dn_cache_getroute, dn_fib_dump); +#else + rtnl_register(PF_DECnet, RTM_GETROUTE, dn_cache_getroute, + dn_cache_dump); +#endif } void __exit dn_route_cleanup(void) diff -puN net/decnet/dn_rules.c~git-net net/decnet/dn_rules.c --- a/net/decnet/dn_rules.c~git-net +++ a/net/decnet/dn_rules.c @@ -31,6 +31,7 @@ #include #include #include +#include static struct fib_rules_ops dn_fib_rules_ops; @@ -239,9 +240,9 @@ static u32 dn_fib_rule_default_pref(void return 0; } -int dn_fib_dump_rules(struct sk_buff *skb, struct netlink_callback *cb) +static void dn_fib_rule_flush_cache(void) { - return fib_rules_dump(skb, cb, AF_DECnet); + dn_rt_cache_flush(-1); } static struct fib_rules_ops dn_fib_rules_ops = { @@ -254,6 +255,7 @@ static struct fib_rules_ops dn_fib_rules .compare = dn_fib_rule_compare, .fill = dn_fib_rule_fill, .default_pref = dn_fib_rule_default_pref, + .flush_cache = dn_fib_rule_flush_cache, .nlgroup = RTNLGRP_DECnet_RULE, .policy = dn_fib_rule_policy, .rules_list = &dn_fib_rules, diff -puN net/decnet/dn_table.c~git-net net/decnet/dn_table.c --- a/net/decnet/dn_table.c~git-net +++ a/net/decnet/dn_table.c @@ -28,6 +28,7 @@ #include #include /* RTF_xxx */ #include +#include #include #include #include @@ -295,7 +296,7 @@ static int dn_fib_dump_info(struct sk_bu { struct rtmsg *rtm; struct nlmsghdr *nlh; - unsigned char *b = skb->tail; + unsigned char *b = skb_tail_pointer(skb); nlh = NLMSG_NEW(skb, pid, seq, event, sizeof(*rtm), flags); rtm = NLMSG_DATA(nlh); @@ -337,19 +338,19 @@ static int dn_fib_dump_info(struct sk_bu nhp->rtnh_ifindex = nh->nh_oif; if (nh->nh_gw) RTA_PUT(skb, RTA_GATEWAY, 2, &nh->nh_gw); - nhp->rtnh_len = skb->tail - (unsigned char *)nhp; + nhp->rtnh_len = skb_tail_pointer(skb) - (unsigned char *)nhp; } endfor_nexthops(fi); mp_head->rta_type = RTA_MULTIPATH; - mp_head->rta_len = skb->tail - (u8*)mp_head; + mp_head->rta_len = skb_tail_pointer(skb) - (u8 *)mp_head; } - nlh->nlmsg_len = skb->tail - b; + nlh->nlmsg_len = skb_tail_pointer(skb) - b; return skb->len; nlmsg_failure: rtattr_failure: - skb_trim(skb, b - skb->data); + nlmsg_trim(skb, b); return -EMSGSIZE; } diff -puN net/decnet/netfilter/dn_rtmsg.c~git-net net/decnet/netfilter/dn_rtmsg.c --- a/net/decnet/netfilter/dn_rtmsg.c~git-net +++ a/net/decnet/netfilter/dn_rtmsg.c @@ -33,7 +33,7 @@ static struct sk_buff *dnrmg_build_messa { struct sk_buff *skb = NULL; size_t size; - unsigned char *old_tail; + sk_buff_data_t old_tail; struct nlmsghdr *nlh; unsigned char *ptr; struct nf_dn_rtmsg *rtm; @@ -48,7 +48,7 @@ static struct sk_buff *dnrmg_build_messa rtm = (struct nf_dn_rtmsg *)NLMSG_DATA(nlh); rtm->nfdn_ifindex = rt_skb->dev->ifindex; ptr = NFDN_RTMSG(rtm); - memcpy(ptr, rt_skb->data, rt_skb->len); + skb_copy_from_linear_data(rt_skb, ptr, rt_skb->len); nlh->nlmsg_len = skb->tail - old_tail; return skb; @@ -102,7 
+102,7 @@ static unsigned int dnrmg_hook(unsigned static inline void dnrmg_receive_user_skb(struct sk_buff *skb) { - struct nlmsghdr *nlh = (struct nlmsghdr *)skb->data; + struct nlmsghdr *nlh = nlmsg_hdr(skb); if (nlh->nlmsg_len < sizeof(*nlh) || skb->len < nlh->nlmsg_len) return; @@ -138,7 +138,7 @@ static int __init dn_rtmsg_init(void) int rv = 0; dnrmg = netlink_kernel_create(NETLINK_DNRTMSG, DNRNG_NLGRP_MAX, - dnrmg_receive_user_sk, THIS_MODULE); + dnrmg_receive_user_sk, NULL, THIS_MODULE); if (dnrmg == NULL) { printk(KERN_ERR "dn_rtmsg: Cannot create netlink socket"); return -ENOMEM; diff -puN net/econet/af_econet.c~git-net net/econet/af_econet.c --- a/net/econet/af_econet.c~git-net +++ a/net/econet/af_econet.c @@ -162,7 +162,7 @@ static int econet_recvmsg(struct kiocb * err = memcpy_toiovec(msg->msg_iov, skb->data, copied); if (err) goto out_free; - skb_get_timestamp(skb, &sk->sk_stamp); + sk->sk_stamp = skb->tstamp; if (msg->msg_name) memcpy(msg->msg_name, skb->cb, msg->msg_namelen); @@ -345,7 +345,7 @@ static int econet_sendmsg(struct kiocb * goto out_unlock; skb_reserve(skb, LL_RESERVED_SPACE(dev)); - skb->nh.raw = skb->data; + skb_reset_network_header(skb); eb = (struct ec_cb *)&skb->cb; @@ -366,7 +366,7 @@ static int econet_sendmsg(struct kiocb * fh->cb = cb; fh->port = port; if (sock->type != SOCK_DGRAM) { - skb->tail = skb->data; + skb_reset_tail_pointer(skb); skb->len = 0; } else if (res < 0) goto out_free; @@ -727,6 +727,9 @@ static int econet_ioctl(struct socket *s case SIOCGSTAMP: return sock_get_timestamp(sk, argp); + case SIOCGSTAMPNS: + return sock_get_timestampns(sk, argp); + case SIOCSIFADDR: case SIOCGIFADDR: return ec_dev_ioctl(sock, cmd, argp); @@ -845,7 +848,7 @@ static void aun_send_response(__u32 addr static void aun_incoming(struct sk_buff *skb, struct aunhdr *ah, size_t len) { - struct iphdr *ip = skb->nh.iph; + struct iphdr *ip = ip_hdr(skb); unsigned char stn = ntohl(ip->saddr) & 0xff; struct sock *sk; struct sk_buff *newskb; @@ -940,10 +943,10 @@ static void aun_data_available(struct so printk(KERN_DEBUG "AUN: recvfrom() error %d\n", -err); } - data = skb->h.raw + sizeof(struct udphdr); + data = skb_transport_header(skb) + sizeof(struct udphdr); ah = (struct aunhdr *)data; len = skb->len - sizeof(struct udphdr); - ip = skb->nh.iph; + ip = ip_hdr(skb); switch (ah->code) { diff -puN net/ethernet/eth.c~git-net net/ethernet/eth.c --- a/net/ethernet/eth.c~git-net +++ a/net/ethernet/eth.c @@ -156,7 +156,8 @@ __be16 eth_type_trans(struct sk_buff *sk struct ethhdr *eth; unsigned char *rawp; - skb->mac.raw = skb->data; + skb->dev = dev; + skb_reset_mac_header(skb); skb_pull(skb, ETH_HLEN); eth = eth_hdr(skb); @@ -228,7 +229,7 @@ int eth_header_cache(struct neighbour *n eth = (struct ethhdr *) (((u8 *) hh->hh_data) + (HH_DATA_OFF(sizeof(*eth)))); - if (type == __constant_htons(ETH_P_802_3)) + if (type == htons(ETH_P_802_3)) return -1; eth->h_proto = type; diff -puN net/ieee80211/Kconfig~git-net net/ieee80211/Kconfig --- a/net/ieee80211/Kconfig~git-net +++ a/net/ieee80211/Kconfig @@ -56,7 +56,8 @@ config IEEE80211_CRYPT_CCMP config IEEE80211_CRYPT_TKIP tristate "IEEE 802.11i TKIP encryption" - depends on IEEE80211 && NET_RADIO + depends on IEEE80211 + select WIRELESS_EXT select CRYPTO select CRYPTO_MICHAEL_MIC select CRYPTO_ECB diff -puN net/ieee80211/ieee80211_crypt_wep.c~git-net net/ieee80211/ieee80211_crypt_wep.c --- a/net/ieee80211/ieee80211_crypt_wep.c~git-net +++ a/net/ieee80211/ieee80211_crypt_wep.c @@ -152,7 +152,7 @@ static int prism2_wep_encrypt(struct sk_ 
return -1; /* Copy the IV into the first 3 bytes of the key */ - memcpy(key, skb->data + hdr_len, 3); + skb_copy_from_linear_data_offset(skb, hdr_len, key, 3); /* Copy rest of the WEP key (the secret part) */ memcpy(key + 3, wep->key, wep->key_len); diff -puN net/ieee80211/ieee80211_rx.c~git-net net/ieee80211/ieee80211_rx.c --- a/net/ieee80211/ieee80211_rx.c~git-net +++ a/net/ieee80211/ieee80211_rx.c @@ -42,7 +42,7 @@ static void ieee80211_monitor_rx(struct u16 fc = le16_to_cpu(hdr->frame_ctl); skb->dev = ieee->dev; - skb->mac.raw = skb->data; + skb_reset_mac_header(skb); skb_pull(skb, ieee80211_get_hdrlen(fc)); skb->pkt_type = PACKET_OTHERHOST; skb->protocol = __constant_htons(ETH_P_80211_RAW); @@ -606,12 +606,12 @@ int ieee80211_rx(struct ieee80211_device if (frag == 0) { /* copy first fragment (including full headers) into * beginning of the fragment cache skb */ - memcpy(skb_put(frag_skb, flen), skb->data, flen); + skb_copy_from_linear_data(skb, skb_put(frag_skb, flen), flen); } else { /* append frame payload to the end of the fragment * cache skb */ - memcpy(skb_put(frag_skb, flen), skb->data + hdrlen, - flen); + skb_copy_from_linear_data_offset(skb, hdrlen, + skb_put(frag_skb, flen), flen); } dev_kfree_skb_any(skb); skb = NULL; @@ -759,8 +759,9 @@ int ieee80211_rx(struct ieee80211_device IEEE80211_FCTL_TODS) && skb->len >= ETH_HLEN + ETH_ALEN) { /* Non-standard frame: get addr4 from its bogus location after * the payload */ - memcpy(skb->data + ETH_ALEN, - skb->data + skb->len - ETH_ALEN, ETH_ALEN); + skb_copy_to_linear_data_offset(skb, ETH_ALEN, + skb->data + skb->len - ETH_ALEN, + ETH_ALEN); skb_trim(skb, skb->len - ETH_ALEN); } #endif @@ -789,10 +790,11 @@ int ieee80211_rx(struct ieee80211_device if (skb2 != NULL) { /* send to wireless media */ - skb2->protocol = __constant_htons(ETH_P_802_3); - skb2->mac.raw = skb2->nh.raw = skb2->data; - /* skb2->nh.raw = skb2->data + ETH_HLEN; */ skb2->dev = dev; + skb2->protocol = __constant_htons(ETH_P_802_3); + skb_reset_mac_header(skb2); + skb_reset_network_header(skb2); + /* skb2->network_header += ETH_HLEN; */ dev_queue_xmit(skb2); } #endif @@ -800,7 +802,6 @@ int ieee80211_rx(struct ieee80211_device if (skb) { skb->protocol = eth_type_trans(skb, dev); memset(skb->cb, 0, sizeof(skb->cb)); - skb->dev = dev; skb->ip_summed = CHECKSUM_NONE; /* 802.11 crc not sufficient */ if (netif_rx(skb) == NET_RX_DROP) { /* netif_rx always succeeds, but it might drop diff -puN net/ieee80211/ieee80211_tx.c~git-net net/ieee80211/ieee80211_tx.c --- a/net/ieee80211/ieee80211_tx.c~git-net +++ a/net/ieee80211/ieee80211_tx.c @@ -225,10 +225,10 @@ static int ieee80211_classify(struct sk_ struct iphdr *ip; eth = (struct ethhdr *)skb->data; - if (eth->h_proto != __constant_htons(ETH_P_IP)) + if (eth->h_proto != htons(ETH_P_IP)) return 0; - ip = skb->nh.iph; + ip = ip_hdr(skb); switch (ip->tos & 0xfc) { case 0x20: return 2; @@ -309,8 +309,8 @@ int ieee80211_xmit(struct sk_buff *skb, } /* Save source and destination addresses */ - memcpy(dest, skb->data, ETH_ALEN); - memcpy(src, skb->data + ETH_ALEN, ETH_ALEN); + skb_copy_from_linear_data(skb, dest, ETH_ALEN); + skb_copy_from_linear_data_offset(skb, ETH_ALEN, src, ETH_ALEN); if (host_encrypt || host_build_iv) fc = IEEE80211_FTYPE_DATA | IEEE80211_STYPE_DATA | @@ -363,7 +363,7 @@ int ieee80211_xmit(struct sk_buff *skb, snapped = 1; ieee80211_copy_snap(skb_put(skb_new, SNAP_SIZE + sizeof(u16)), ether_type); - memcpy(skb_put(skb_new, skb->len), skb->data, skb->len); + skb_copy_from_linear_data(skb, skb_put(skb_new, 
skb->len), skb->len); res = crypt->ops->encrypt_msdu(skb_new, hdr_len, crypt->priv); if (res < 0) { IEEE80211_ERROR("msdu encryption failed\n"); @@ -492,7 +492,7 @@ int ieee80211_xmit(struct sk_buff *skb, bytes -= SNAP_SIZE + sizeof(u16); } - memcpy(skb_put(skb_frag, bytes), skb->data, bytes); + skb_copy_from_linear_data(skb, skb_put(skb_frag, bytes), bytes); /* Advance the SKB... */ skb_pull(skb, bytes); diff -puN net/ipv4/Kconfig~git-net net/ipv4/Kconfig --- a/net/ipv4/Kconfig~git-net +++ a/net/ipv4/Kconfig @@ -574,6 +574,33 @@ config TCP_CONG_VENO loss packets. See http://www.ntu.edu.sg/home5/ZHOU0022/papers/CPFu03a.pdf +config TCP_CONG_YEAH + tristate "YeAH TCP" + depends on EXPERIMENTAL + default n + ---help--- + YeAH-TCP is a sender-side high-speed enabled TCP congestion control + algorithm, which uses a mixed loss/delay approach to compute the + congestion window. Its design goals target high efficiency, + internal, RTT and Reno fairness, resilience to link loss while + keeping network elements load as low as possible. + + For further details look here: + http://wil.cs.caltech.edu/pfldnet2007/paper/YeAH_TCP.pdf + +config TCP_CONG_ILLINOIS + tristate "TCP Illinois" + depends on EXPERIMENTAL + default n + ---help--- + TCP-Illinois is a sender-side modification of TCP Reno for + high speed long delay links. It uses round-trip-time to + adjust the alpha and beta parameters to achieve a higher average + throughput and maintain fairness. + + For further details see: + http://www.ews.uiuc.edu/~shaoliu/tcpillinois/index.html + choice prompt "Default TCP congestion control" default DEFAULT_CUBIC diff -puN net/ipv4/Makefile~git-net net/ipv4/Makefile --- a/net/ipv4/Makefile~git-net +++ a/net/ipv4/Makefile @@ -49,6 +49,8 @@ obj-$(CONFIG_TCP_CONG_VEGAS) += tcp_vega obj-$(CONFIG_TCP_CONG_VENO) += tcp_veno.o obj-$(CONFIG_TCP_CONG_SCALABLE) += tcp_scalable.o obj-$(CONFIG_TCP_CONG_LP) += tcp_lp.o +obj-$(CONFIG_TCP_CONG_YEAH) += tcp_yeah.o +obj-$(CONFIG_TCP_CONG_ILLINOIS) += tcp_illinois.o obj-$(CONFIG_NETLABEL) += cipso_ipv4.o obj-$(CONFIG_XFRM) += xfrm4_policy.o xfrm4_state.o xfrm4_input.o \ diff -puN net/ipv4/af_inet.c~git-net net/ipv4/af_inet.c --- a/net/ipv4/af_inet.c~git-net +++ a/net/ipv4/af_inet.c @@ -87,6 +87,7 @@ #include #include #include +#include #include #include @@ -217,6 +218,26 @@ out: return err; } +u32 inet_ehash_secret __read_mostly; +EXPORT_SYMBOL(inet_ehash_secret); + +/* + * inet_ehash_secret must be set exactly once + * Instead of using a dedicated spinlock, we (ab)use inetsw_lock + */ +void build_ehash_secret(void) +{ + u32 rnd; + do { + get_random_bytes(&rnd, sizeof(rnd)); + } while (rnd == 0); + spin_lock_bh(&inetsw_lock); + if (!inet_ehash_secret) + inet_ehash_secret = rnd; + spin_unlock_bh(&inetsw_lock); +} +EXPORT_SYMBOL(build_ehash_secret); + /* * Create an inet socket. */ @@ -233,6 +254,11 @@ static int inet_create(struct socket *so int try_loading_module = 0; int err; + if (sock->type != SOCK_RAW && + sock->type != SOCK_DGRAM && + !inet_ehash_secret) + build_ehash_secret(); + sock->state = SS_UNCONNECTED; /* Look for the requested type/protocol pair. 
*/ @@ -755,6 +781,9 @@ int inet_ioctl(struct socket *sock, unsi case SIOCGSTAMP: err = sock_get_timestamp(sk, (struct timeval __user *)arg); break; + case SIOCGSTAMPNS: + err = sock_get_timestampns(sk, (struct timespec __user *)arg); + break; case SIOCADDRT: case SIOCDELRT: case SIOCRTMSG: @@ -1109,7 +1138,7 @@ static int inet_gso_send_check(struct sk if (unlikely(!pskb_may_pull(skb, sizeof(*iph)))) goto out; - iph = skb->nh.iph; + iph = ip_hdr(skb); ihl = iph->ihl * 4; if (ihl < sizeof(*iph)) goto out; @@ -1117,8 +1146,9 @@ static int inet_gso_send_check(struct sk if (unlikely(!pskb_may_pull(skb, ihl))) goto out; - skb->h.raw = __skb_pull(skb, ihl); - iph = skb->nh.iph; + __skb_pull(skb, ihl); + skb_reset_transport_header(skb); + iph = ip_hdr(skb); proto = iph->protocol & (MAX_INET_PROTOS - 1); err = -EPROTONOSUPPORT; @@ -1152,7 +1182,7 @@ static struct sk_buff *inet_gso_segment( if (unlikely(!pskb_may_pull(skb, sizeof(*iph)))) goto out; - iph = skb->nh.iph; + iph = ip_hdr(skb); ihl = iph->ihl * 4; if (ihl < sizeof(*iph)) goto out; @@ -1160,8 +1190,9 @@ static struct sk_buff *inet_gso_segment( if (unlikely(!pskb_may_pull(skb, ihl))) goto out; - skb->h.raw = __skb_pull(skb, ihl); - iph = skb->nh.iph; + __skb_pull(skb, ihl); + skb_reset_transport_header(skb); + iph = ip_hdr(skb); id = ntohs(iph->id); proto = iph->protocol & (MAX_INET_PROTOS - 1); segs = ERR_PTR(-EPROTONOSUPPORT); @@ -1177,11 +1208,11 @@ static struct sk_buff *inet_gso_segment( skb = segs; do { - iph = skb->nh.iph; + iph = ip_hdr(skb); iph->id = htons(id++); iph->tot_len = htons(skb->len - skb->mac_len); iph->check = 0; - iph->check = ip_fast_csum(skb->nh.raw, iph->ihl); + iph->check = ip_fast_csum(skb_network_header(skb), iph->ihl); } while ((skb = skb->next)); out: @@ -1214,28 +1245,47 @@ static struct net_protocol icmp_protocol static int __init init_ipv4_mibs(void) { - net_statistics[0] = alloc_percpu(struct linux_mib); - net_statistics[1] = alloc_percpu(struct linux_mib); - ip_statistics[0] = alloc_percpu(struct ipstats_mib); - ip_statistics[1] = alloc_percpu(struct ipstats_mib); - icmp_statistics[0] = alloc_percpu(struct icmp_mib); - icmp_statistics[1] = alloc_percpu(struct icmp_mib); - tcp_statistics[0] = alloc_percpu(struct tcp_mib); - tcp_statistics[1] = alloc_percpu(struct tcp_mib); - udp_statistics[0] = alloc_percpu(struct udp_mib); - udp_statistics[1] = alloc_percpu(struct udp_mib); - udplite_statistics[0] = alloc_percpu(struct udp_mib); - udplite_statistics[1] = alloc_percpu(struct udp_mib); - if (! 
- (net_statistics[0] && net_statistics[1] && ip_statistics[0] - && ip_statistics[1] && tcp_statistics[0] && tcp_statistics[1] - && udp_statistics[0] && udp_statistics[1] - && udplite_statistics[0] && udplite_statistics[1] ) ) - return -ENOMEM; + if (snmp_mib_init((void **)net_statistics, + sizeof(struct linux_mib), + __alignof__(struct linux_mib)) < 0) + goto err_net_mib; + if (snmp_mib_init((void **)ip_statistics, + sizeof(struct ipstats_mib), + __alignof__(struct ipstats_mib)) < 0) + goto err_ip_mib; + if (snmp_mib_init((void **)icmp_statistics, + sizeof(struct icmp_mib), + __alignof__(struct icmp_mib)) < 0) + goto err_icmp_mib; + if (snmp_mib_init((void **)tcp_statistics, + sizeof(struct tcp_mib), + __alignof__(struct tcp_mib)) < 0) + goto err_tcp_mib; + if (snmp_mib_init((void **)udp_statistics, + sizeof(struct udp_mib), + __alignof__(struct udp_mib)) < 0) + goto err_udp_mib; + if (snmp_mib_init((void **)udplite_statistics, + sizeof(struct udp_mib), + __alignof__(struct udp_mib)) < 0) + goto err_udplite_mib; - (void) tcp_mib_init(); + tcp_mib_init(); return 0; + +err_udplite_mib: + snmp_mib_free((void **)udp_statistics); +err_udp_mib: + snmp_mib_free((void **)tcp_statistics); +err_tcp_mib: + snmp_mib_free((void **)icmp_statistics); +err_icmp_mib: + snmp_mib_free((void **)ip_statistics); +err_ip_mib: + snmp_mib_free((void **)net_statistics); +err_net_mib: + return -ENOMEM; } static int ipv4_proc_init(void); @@ -1336,7 +1386,7 @@ static int __init inet_init(void) * Initialise per-cpu ipv4 mibs */ - if(init_ipv4_mibs()) + if (init_ipv4_mibs()) printk(KERN_CRIT "inet_init: Cannot init ipv4 mibs\n"); ; ipv4_proc_init(); diff -puN net/ipv4/ah4.c~git-net net/ipv4/ah4.c --- a/net/ipv4/ah4.c~git-net +++ a/net/ipv4/ah4.c @@ -65,7 +65,7 @@ static int ah_output(struct xfrm_state * char buf[60]; } tmp_iph; - top_iph = skb->nh.iph; + top_iph = ip_hdr(skb); iph = &tmp_iph.iph; iph->tos = top_iph->tos; @@ -152,9 +152,9 @@ static int ah_input(struct xfrm_state *x skb->ip_summed = CHECKSUM_NONE; ah = (struct ip_auth_hdr*)skb->data; - iph = skb->nh.iph; + iph = ip_hdr(skb); - ihl = skb->data - skb->nh.raw; + ihl = skb->data - skb_network_header(skb); memcpy(work_buf, iph, ihl); iph->ttl = 0; @@ -181,7 +181,9 @@ static int ah_input(struct xfrm_state *x } } ((struct iphdr*)work_buf)->protocol = ah->nexthdr; - skb->h.raw = memcpy(skb->nh.raw += ah_hlen, work_buf, ihl); + skb->network_header += ah_hlen; + memcpy(skb_network_header(skb), work_buf, ihl); + skb->transport_header = skb->network_header; __skb_pull(skb, ah_hlen + ihl); return 0; @@ -196,8 +198,8 @@ static void ah4_err(struct sk_buff *skb, struct ip_auth_hdr *ah = (struct ip_auth_hdr*)(skb->data+(iph->ihl<<2)); struct xfrm_state *x; - if (skb->h.icmph->type != ICMP_DEST_UNREACH || - skb->h.icmph->code != ICMP_FRAG_NEEDED) + if (icmp_hdr(skb)->type != ICMP_DEST_UNREACH || + icmp_hdr(skb)->code != ICMP_FRAG_NEEDED) return; x = xfrm_state_lookup((xfrm_address_t *)&iph->daddr, ah->spi, IPPROTO_AH, AF_INET); diff -puN net/ipv4/arp.c~git-net net/ipv4/arp.c --- a/net/ipv4/arp.c~git-net +++ a/net/ipv4/arp.c @@ -342,13 +342,13 @@ static void arp_solicit(struct neighbour switch (IN_DEV_ARP_ANNOUNCE(in_dev)) { default: case 0: /* By default announce any local IP */ - if (skb && inet_addr_type(skb->nh.iph->saddr) == RTN_LOCAL) - saddr = skb->nh.iph->saddr; + if (skb && inet_addr_type(ip_hdr(skb)->saddr) == RTN_LOCAL) + saddr = ip_hdr(skb)->saddr; break; case 1: /* Restrict announcements of saddr in same subnet */ if (!skb) break; - saddr = skb->nh.iph->saddr; 
+ saddr = ip_hdr(skb)->saddr; if (inet_addr_type(saddr) == RTN_LOCAL) { /* saddr should be known to target */ if (inet_addr_onlink(in_dev, target, saddr)) @@ -578,7 +578,7 @@ struct sk_buff *arp_create(int type, int return NULL; skb_reserve(skb, LL_RESERVED_SPACE(dev)); - skb->nh.raw = skb->data; + skb_reset_network_header(skb); arp = (struct arphdr *) skb_put(skb,sizeof(struct arphdr) + 2*(dev->addr_len+4)); skb->dev = dev; skb->protocol = htons(ETH_P_ARP); @@ -721,7 +721,7 @@ static int arp_process(struct sk_buff *s if (in_dev == NULL) goto out; - arp = skb->nh.arph; + arp = arp_hdr(skb); switch (dev_type) { default: @@ -937,7 +937,7 @@ static int arp_rcv(struct sk_buff *skb, (2 * sizeof(u32))))) goto freeskb; - arp = skb->nh.arph; + arp = arp_hdr(skb); if (arp->ar_hln != dev->addr_len || dev->flags & IFF_NOARP || skb->pkt_type == PACKET_OTHERHOST || @@ -1178,7 +1178,7 @@ int arp_ioctl(unsigned int cmd, void __u goto out; } - switch(cmd) { + switch (cmd) { case SIOCDARP: err = arp_req_delete(&r, dev); break; @@ -1360,7 +1360,7 @@ static void *arp_seq_start(struct seq_fi /* ------------------------------------------------------------------------ */ -static struct seq_operations arp_seq_ops = { +static const struct seq_operations arp_seq_ops = { .start = arp_seq_start, .next = neigh_seq_next, .stop = neigh_seq_stop, diff -puN net/ipv4/cipso_ipv4.c~git-net net/ipv4/cipso_ipv4.c --- a/net/ipv4/cipso_ipv4.c~git-net +++ a/net/ipv4/cipso_ipv4.c @@ -1174,7 +1174,7 @@ static int cipso_v4_map_cat_rng_ntoh(con u16 cat_low; u16 cat_high; - for(net_iter = 0; net_iter < net_cat_len; net_iter += 4) { + for (net_iter = 0; net_iter < net_cat_len; net_iter += 4) { cat_high = ntohs(*((__be16 *)&net_cat[net_iter])); if ((net_iter + 4) <= net_cat_len) cat_low = ntohs(*((__be16 *)&net_cat[net_iter + 2])); @@ -1676,7 +1676,7 @@ validate_return: */ void cipso_v4_error(struct sk_buff *skb, int error, u32 gateway) { - if (skb->nh.iph->protocol == IPPROTO_ICMP || error != -EACCES) + if (ip_hdr(skb)->protocol == IPPROTO_ICMP || error != -EACCES) return; if (gateway) diff -puN net/ipv4/devinet.c~git-net net/ipv4/devinet.c --- a/net/ipv4/devinet.c~git-net +++ a/net/ipv4/devinet.c @@ -48,7 +48,6 @@ #include #include #include -#include #include #include #include @@ -62,7 +61,7 @@ #include #include #include -#include +#include struct ipv4_devconf ipv4_devconf = { .accept_redirects = 1, @@ -633,7 +632,7 @@ int devinet_ioctl(unsigned int cmd, void dev_load(ifr.ifr_name); #endif - switch(cmd) { + switch (cmd) { case SIOCGIFADDR: /* Get interface address */ case SIOCGIFBRDADDR: /* Get the broadcast address */ case SIOCGIFDSTADDR: /* Get the destination address */ @@ -708,7 +707,7 @@ int devinet_ioctl(unsigned int cmd, void if (!ifa && cmd != SIOCSIFADDR && cmd != SIOCSIFFLAGS) goto done; - switch(cmd) { + switch (cmd) { case SIOCGIFADDR: /* Get interface address */ sin->sin_addr.s_addr = ifa->ifa_local; goto rarok; @@ -1183,17 +1182,13 @@ static int inet_dump_ifaddr(struct sk_bu int s_ip_idx, s_idx = cb->args[0]; s_ip_idx = ip_idx = cb->args[1]; - read_lock(&dev_base_lock); for (dev = dev_base, idx = 0; dev; dev = dev->next, idx++) { if (idx < s_idx) continue; if (idx > s_idx) s_ip_idx = 0; - rcu_read_lock(); - if ((in_dev = __in_dev_get_rcu(dev)) == NULL) { - rcu_read_unlock(); + if ((in_dev = __in_dev_get_rtnl(dev)) == NULL) continue; - } for (ifa = in_dev->ifa_list, ip_idx = 0; ifa; ifa = ifa->ifa_next, ip_idx++) { @@ -1201,16 +1196,12 @@ static int inet_dump_ifaddr(struct sk_bu continue; if (inet_fill_ifaddr(skb, ifa, 
NETLINK_CB(cb->skb).pid, cb->nlh->nlmsg_seq, - RTM_NEWADDR, NLM_F_MULTI) <= 0) { - rcu_read_unlock(); + RTM_NEWADDR, NLM_F_MULTI) <= 0) goto done; - } } - rcu_read_unlock(); } done: - read_unlock(&dev_base_lock); cb->args[0] = idx; cb->args[1] = ip_idx; @@ -1241,19 +1232,6 @@ errout: rtnl_set_sk_err(RTNLGRP_IPV4_IFADDR, err); } -static struct rtnetlink_link inet_rtnetlink_table[RTM_NR_MSGTYPES] = { - [RTM_NEWADDR - RTM_BASE] = { .doit = inet_rtm_newaddr, }, - [RTM_DELADDR - RTM_BASE] = { .doit = inet_rtm_deladdr, }, - [RTM_GETADDR - RTM_BASE] = { .dumpit = inet_dump_ifaddr, }, - [RTM_NEWROUTE - RTM_BASE] = { .doit = inet_rtm_newroute, }, - [RTM_DELROUTE - RTM_BASE] = { .doit = inet_rtm_delroute, }, - [RTM_GETROUTE - RTM_BASE] = { .doit = inet_rtm_getroute, - .dumpit = inet_dump_fib, }, -#ifdef CONFIG_IP_MULTIPLE_TABLES - [RTM_GETRULE - RTM_BASE] = { .dumpit = fib4_rules_dump, }, -#endif -}; - #ifdef CONFIG_SYSCTL void inet_forward_change(void) @@ -1636,7 +1614,10 @@ void __init devinet_init(void) { register_gifconf(PF_INET, inet_gifconf); register_netdevice_notifier(&ip_netdev_notifier); - rtnetlink_links[PF_INET] = inet_rtnetlink_table; + + rtnl_register(PF_INET, RTM_NEWADDR, inet_rtm_newaddr, NULL); + rtnl_register(PF_INET, RTM_DELADDR, inet_rtm_deladdr, NULL); + rtnl_register(PF_INET, RTM_GETADDR, NULL, inet_dump_ifaddr); #ifdef CONFIG_SYSCTL devinet_sysctl.sysctl_header = register_sysctl_table(devinet_sysctl.devinet_root_dir); diff -puN net/ipv4/esp4.c~git-net net/ipv4/esp4.c --- a/net/ipv4/esp4.c~git-net +++ a/net/ipv4/esp4.c @@ -21,13 +21,14 @@ static int esp_output(struct xfrm_state struct blkcipher_desc desc; struct esp_data *esp; struct sk_buff *trailer; + u8 *tail; int blksize; int clen; int alen; int nfrags; /* Strip IP+ESP header. */ - __skb_pull(skb, skb->h.raw - skb->data); + __skb_pull(skb, skb_transport_offset(skb)); /* Now skb is pure payload to encrypt */ err = -ENOMEM; @@ -49,19 +50,21 @@ static int esp_output(struct xfrm_state goto error; /* Fill padding... */ + tail = skb_tail_pointer(trailer); do { int i; for (i=0; i<clen-skb->len - 2; i++) - *(u8*)(trailer->tail + i) = i+1; + tail[i] = i + 1; } while (0); - *(u8*)(trailer->tail + clen-skb->len - 2) = (clen - skb->len)-2; + tail[clen - skb->len - 2] = (clen - skb->len) - 2; pskb_put(skb, trailer, clen - skb->len); - __skb_push(skb, skb->data - skb->nh.raw); - top_iph = skb->nh.iph; - esph = (struct ip_esp_hdr *)(skb->nh.raw + top_iph->ihl*4); + __skb_push(skb, skb->data - skb_network_header(skb)); + top_iph = ip_hdr(skb); + esph = (struct ip_esp_hdr *)(skb_network_header(skb) + + top_iph->ihl * 4); top_iph->tot_len = htons(skb->len + alen); - *(u8*)(trailer->tail - 1) = top_iph->protocol; + *(skb_tail_pointer(trailer) - 1) = top_iph->protocol; /* this is non-NULL only with UDP Encapsulation */ if (x->encap) { @@ -217,12 +220,12 @@ static int esp_input(struct xfrm_state * /* ... check padding bits here. Silly. 
:-) */ - iph = skb->nh.iph; + iph = ip_hdr(skb); ihl = iph->ihl * 4; if (x->encap) { struct xfrm_encap_tmpl *encap = x->encap; - struct udphdr *uh = (void *)(skb->nh.raw + ihl); + struct udphdr *uh = (void *)(skb_network_header(skb) + ihl); /* * 1) if the NAT-T peer's IP or port changed then @@ -260,7 +263,8 @@ static int esp_input(struct xfrm_state * iph->protocol = nexthdr[1]; pskb_trim(skb, skb->len - alen - padlen - 2); - skb->h.raw = __skb_pull(skb, sizeof(*esph) + esp->conf.ivlen) - ihl; + __skb_pull(skb, sizeof(*esph) + esp->conf.ivlen); + skb_set_transport_header(skb, -ihl); return 0; @@ -268,32 +272,33 @@ out: return -EINVAL; } -static u32 esp4_get_max_size(struct xfrm_state *x, int mtu) +static u32 esp4_get_mtu(struct xfrm_state *x, int mtu) { struct esp_data *esp = x->data; u32 blksize = ALIGN(crypto_blkcipher_blocksize(esp->conf.tfm), 4); - int enclen = 0; + u32 align = max_t(u32, blksize, esp->conf.padlen); + u32 rem; + + mtu -= x->props.header_len + esp->auth.icv_trunc_len; + rem = mtu & (align - 1); + mtu &= ~(align - 1); switch (x->props.mode) { case XFRM_MODE_TUNNEL: - mtu = ALIGN(mtu +2, blksize); break; default: case XFRM_MODE_TRANSPORT: /* The worst case */ - mtu = ALIGN(mtu + 2, 4) + blksize - 4; + mtu -= blksize - 4; + mtu += min_t(u32, blksize - 4, rem); break; case XFRM_MODE_BEET: /* The worst case. */ - enclen = IPV4_BEET_PHMAXLEN; - mtu = ALIGN(mtu + enclen + 2, blksize); + mtu += min_t(u32, IPV4_BEET_PHMAXLEN, rem); break; } - if (esp->conf.padlen) - mtu = ALIGN(mtu, esp->conf.padlen); - - return mtu + x->props.header_len + esp->auth.icv_trunc_len - enclen; + return mtu - 2; } static void esp4_err(struct sk_buff *skb, u32 info) @@ -302,8 +307,8 @@ static void esp4_err(struct sk_buff *skb struct ip_esp_hdr *esph = (struct ip_esp_hdr*)(skb->data+(iph->ihl<<2)); struct xfrm_state *x; - if (skb->h.icmph->type != ICMP_DEST_UNREACH || - skb->h.icmph->code != ICMP_FRAG_NEEDED) + if (icmp_hdr(skb)->type != ICMP_DEST_UNREACH || + icmp_hdr(skb)->code != ICMP_FRAG_NEEDED) return; x = xfrm_state_lookup((xfrm_address_t *)&iph->daddr, esph->spi, IPPROTO_ESP, AF_INET); @@ -336,6 +341,7 @@ static int esp_init_state(struct xfrm_st { struct esp_data *esp = NULL; struct crypto_blkcipher *tfm; + u32 align; /* null auth and encryption can have zero length keys */ if (x->aalg) { @@ -402,6 +408,8 @@ static int esp_init_state(struct xfrm_st x->props.header_len = sizeof(struct ip_esp_hdr) + esp->conf.ivlen; if (x->props.mode == XFRM_MODE_TUNNEL) x->props.header_len += sizeof(struct iphdr); + else if (x->props.mode == XFRM_MODE_BEET) + x->props.header_len += IPV4_BEET_PHMAXLEN; if (x->encap) { struct xfrm_encap_tmpl *encap = x->encap; @@ -417,7 +425,10 @@ static int esp_init_state(struct xfrm_st } } x->data = esp; - x->props.trailer_len = esp4_get_max_size(x, 0) - x->props.header_len; + align = ALIGN(crypto_blkcipher_blocksize(esp->conf.tfm), 4); + if (esp->conf.padlen) + align = max_t(u32, align, esp->conf.padlen); + x->props.trailer_len = align + 1 + esp->auth.icv_trunc_len; return 0; error: @@ -434,7 +445,7 @@ static struct xfrm_type esp_type = .proto = IPPROTO_ESP, .init_state = esp_init_state, .destructor = esp_destroy, - .get_max_size = esp4_get_max_size, + .get_mtu = esp4_get_mtu, .input = esp_input, .output = esp_output }; diff -puN net/ipv4/fib_frontend.c~git-net net/ipv4/fib_frontend.c --- a/net/ipv4/fib_frontend.c~git-net +++ a/net/ipv4/fib_frontend.c @@ -34,7 +34,6 @@ #include #include #include -#include #include #include @@ -46,6 +45,7 @@ #include #include #include +#include 
#define FFprint(a...) printk(KERN_DEBUG a) @@ -540,7 +540,7 @@ errout: return err; } -int inet_rtm_delroute(struct sk_buff *skb, struct nlmsghdr* nlh, void *arg) +static int inet_rtm_delroute(struct sk_buff *skb, struct nlmsghdr* nlh, void *arg) { struct fib_config cfg; struct fib_table *tb; @@ -561,7 +561,7 @@ errout: return err; } -int inet_rtm_newroute(struct sk_buff *skb, struct nlmsghdr* nlh, void *arg) +static int inet_rtm_newroute(struct sk_buff *skb, struct nlmsghdr* nlh, void *arg) { struct fib_config cfg; struct fib_table *tb; @@ -582,7 +582,7 @@ errout: return err; } -int inet_dump_fib(struct sk_buff *skb, struct netlink_callback *cb) +static int inet_dump_fib(struct sk_buff *skb, struct netlink_callback *cb) { unsigned int h, s_h; unsigned int e = 0, s_e; @@ -801,7 +801,7 @@ static void nl_fib_input(struct sock *sk struct fib_table *tb; skb = skb_dequeue(&sk->sk_receive_queue); - nlh = (struct nlmsghdr *)skb->data; + nlh = nlmsg_hdr(skb); if (skb->len < NLMSG_SPACE(0) || skb->len < nlh->nlmsg_len || nlh->nlmsg_len < NLMSG_LENGTH(sizeof(*frn))) { kfree_skb(skb); @@ -821,7 +821,8 @@ static void nl_fib_input(struct sock *sk static void nl_fib_lookup_init(void) { - netlink_kernel_create(NETLINK_FIB_LOOKUP, 0, nl_fib_input, THIS_MODULE); + netlink_kernel_create(NETLINK_FIB_LOOKUP, 0, nl_fib_input, NULL, + THIS_MODULE); } static void fib_disable_ip(struct net_device *dev, int force) @@ -919,6 +920,10 @@ void __init ip_fib_init(void) register_netdevice_notifier(&fib_netdev_notifier); register_inetaddr_notifier(&fib_inetaddr_notifier); nl_fib_lookup_init(); + + rtnl_register(PF_INET, RTM_NEWROUTE, inet_rtm_newroute, NULL); + rtnl_register(PF_INET, RTM_DELROUTE, inet_rtm_delroute, NULL); + rtnl_register(PF_INET, RTM_GETROUTE, NULL, inet_dump_fib); } EXPORT_SYMBOL(inet_addr_type); diff -puN net/ipv4/fib_hash.c~git-net net/ipv4/fib_hash.c --- a/net/ipv4/fib_hash.c~git-net +++ a/net/ipv4/fib_hash.c @@ -1029,7 +1029,7 @@ out: return 0; } -static struct seq_operations fib_seq_ops = { +static const struct seq_operations fib_seq_ops = { .start = fib_seq_start, .next = fib_seq_next, .stop = fib_seq_stop, diff -puN net/ipv4/fib_rules.c~git-net net/ipv4/fib_rules.c --- a/net/ipv4/fib_rules.c~git-net +++ a/net/ipv4/fib_rules.c @@ -274,11 +274,6 @@ nla_put_failure: return -ENOBUFS; } -int fib4_rules_dump(struct sk_buff *skb, struct netlink_callback *cb) -{ - return fib_rules_dump(skb, cb, AF_INET); -} - static u32 fib4_rule_default_pref(void) { struct list_head *pos; @@ -303,6 +298,11 @@ static size_t fib4_rule_nlmsg_payload(st + nla_total_size(4); /* flow */ } +static void fib4_rule_flush_cache(void) +{ + rt_cache_flush(-1); +} + static struct fib_rules_ops fib4_rules_ops = { .family = AF_INET, .rule_size = sizeof(struct fib4_rule), @@ -314,6 +314,7 @@ static struct fib_rules_ops fib4_rules_o .fill = fib4_rule_fill, .default_pref = fib4_rule_default_pref, .nlmsg_payload = fib4_rule_nlmsg_payload, + .flush_cache = fib4_rule_flush_cache, .nlgroup = RTNLGRP_IPV4_RULE, .policy = fib4_rule_policy, .rules_list = &fib4_rules, diff -puN net/ipv4/fib_semantics.c~git-net net/ipv4/fib_semantics.c --- a/net/ipv4/fib_semantics.c~git-net +++ a/net/ipv4/fib_semantics.c @@ -928,7 +928,7 @@ int fib_semantic_match(struct list_head default: printk(KERN_DEBUG "impossible 102\n"); return -EINVAL; - }; + } } return err; } diff -puN net/ipv4/fib_trie.c~git-net net/ipv4/fib_trie.c --- a/net/ipv4/fib_trie.c~git-net +++ a/net/ipv4/fib_trie.c @@ -50,7 +50,7 @@ * Patrick McHardy */ -#define VERSION "0.407" +#define VERSION 
"0.408" #include #include @@ -292,8 +292,8 @@ static inline void check_tnode(const str static int halve_threshold = 25; static int inflate_threshold = 50; -static int halve_threshold_root = 15; -static int inflate_threshold_root = 25; +static int halve_threshold_root = 8; +static int inflate_threshold_root = 15; static void __alias_free_mem(struct rcu_head *head) @@ -350,11 +350,10 @@ static void __tnode_free_rcu(struct rcu_ static inline void tnode_free(struct tnode *tn) { - if(IS_LEAF(tn)) { + if (IS_LEAF(tn)) { struct leaf *l = (struct leaf *) tn; call_rcu_bh(&l->rcu, __leaf_free_rcu); - } - else + } else call_rcu(&tn->rcu, __tnode_free_rcu); } @@ -459,6 +458,7 @@ static struct node *resize(struct trie * struct tnode *old_tn; int inflate_threshold_use; int halve_threshold_use; + int max_resize; if (!tn) return NULL; @@ -553,13 +553,14 @@ static struct node *resize(struct trie * /* Keep root node larger */ - if(!tn->parent) + if (!tn->parent) inflate_threshold_use = inflate_threshold_root; else inflate_threshold_use = inflate_threshold; err = 0; - while ((tn->full_children > 0 && + max_resize = 10; + while ((tn->full_children > 0 && max_resize-- && 50 * (tn->full_children + tnode_child_length(tn) - tn->empty_children) >= inflate_threshold_use * tnode_child_length(tn))) { @@ -574,6 +575,15 @@ static struct node *resize(struct trie * } } + if (max_resize < 0) { + if (!tn->parent) + printk(KERN_WARNING "Fix inflate_threshold_root. Now=%d size=%d bits\n", + inflate_threshold_root, tn->bits); + else + printk(KERN_WARNING "Fix inflate_threshold. Now=%d size=%d bits\n", + inflate_threshold, tn->bits); + } + check_tnode(tn); /* @@ -584,13 +594,14 @@ static struct node *resize(struct trie * /* Keep root node larger */ - if(!tn->parent) + if (!tn->parent) halve_threshold_use = halve_threshold_root; else halve_threshold_use = halve_threshold; err = 0; - while (tn->bits > 1 && + max_resize = 10; + while (tn->bits > 1 && max_resize-- && 100 * (tnode_child_length(tn) - tn->empty_children) < halve_threshold_use * tnode_child_length(tn)) { @@ -605,6 +616,14 @@ static struct node *resize(struct trie * } } + if (max_resize < 0) { + if (!tn->parent) + printk(KERN_WARNING "Fix halve_threshold_root. Now=%d size=%d bits\n", + halve_threshold_root, tn->bits); + else + printk(KERN_WARNING "Fix halve_threshold. 
Now=%d size=%d bits\n", + halve_threshold, tn->bits); + } /* Only one child remains */ if (tn->empty_children == tnode_child_length(tn) - 1) @@ -2041,12 +2060,12 @@ static struct node *fib_trie_get_first(s { struct node *n ; - if(!t) + if (!t) return NULL; n = rcu_dereference(t->trie); - if(!iter) + if (!iter) return NULL; if (n) { @@ -2086,7 +2105,7 @@ static void trie_collect_stats(struct tr int i; s->tnodes++; - if(tn->bits < MAX_STAT_DEPTH) + if (tn->bits < MAX_STAT_DEPTH) s->nodesizes[tn->bits]++; for (i = 0; i < (1<<tn->bits); i++) @@ -2252,7 +2271,7 @@ static inline const char *rtn_scope(enum { static char buf[32]; - switch(s) { + switch (s) { case RT_SCOPE_UNIVERSE: return "universe"; case RT_SCOPE_SITE: return "site"; case RT_SCOPE_LINK: return "link"; @@ -2342,7 +2361,7 @@ static int fib_trie_seq_show(struct seq_ return 0; } -static struct seq_operations fib_trie_seq_ops = { +static const struct seq_operations fib_trie_seq_ops = { .start = fib_trie_seq_start, .next = fib_trie_seq_next, .stop = fib_trie_seq_stop, @@ -2463,7 +2482,7 @@ static int fib_route_seq_show(struct seq return 0; } -static struct seq_operations fib_route_seq_ops = { +static const struct seq_operations fib_route_seq_ops = { .start = fib_trie_seq_start, .next = fib_trie_seq_next, .stop = fib_trie_seq_stop, diff -puN net/ipv4/icmp.c~git-net net/ipv4/icmp.c --- a/net/ipv4/icmp.c~git-net +++ a/net/ipv4/icmp.c @@ -355,7 +355,7 @@ static void icmp_push_reply(struct icmp_ ipc, rt, MSG_DONTWAIT) < 0) ip_flush_pending_frames(icmp_socket->sk); else if ((skb = skb_peek(&icmp_socket->sk->sk_write_queue)) != NULL) { - struct icmphdr *icmph = skb->h.icmph; + struct icmphdr *icmph = icmp_hdr(skb); __wsum csum = 0; struct sk_buff *skb1; @@ -392,7 +392,7 @@ static void icmp_reply(struct icmp_bxm * icmp_param->data.icmph.checksum = 0; icmp_out_count(icmp_param->data.icmph.type); - inet->tos = skb->nh.iph->tos; + inet->tos = ip_hdr(skb)->tos; daddr = ipc.addr = rt->rt_src; ipc.opt = NULL; if (icmp_param->replyopts.optlen) { @@ -404,7 +404,7 @@ static void icmp_reply(struct icmp_bxm * struct flowi fl = { .nl_u = { .ip4_u = { .daddr = daddr, .saddr = rt->rt_spec_dst, - .tos = RT_TOS(skb->nh.iph->tos) } }, + .tos = RT_TOS(ip_hdr(skb)->tos) } }, .proto = IPPROTO_ICMP }; security_skb_classify_flow(skb, &fl); if (ip_route_output_key(&rt, &fl)) @@ -448,9 +448,10 @@ void icmp_send(struct sk_buff *skb_in, i * Check this, icmp_send is called from the most obscure devices * sometimes. */ - iph = skb_in->nh.iph; + iph = ip_hdr(skb_in); - if ((u8 *)iph < skb_in->head || (u8 *)(iph + 1) > skb_in->tail) + if ((u8 *)iph < skb_in->head || + (skb_in->network_header + sizeof(*iph)) > skb_in->tail) goto out; /* @@ -484,7 +485,7 @@ void icmp_send(struct sk_buff *skb_in, i u8 _inner_type, *itp; itp = skb_header_pointer(skb_in, - skb_in->nh.raw + + skb_network_header(skb_in) + (iph->ihl << 2) + offsetof(struct icmphdr, type) - @@ -536,7 +537,7 @@ void icmp_send(struct sk_buff *skb_in, i icmp_param.data.icmph.un.gateway = info; icmp_param.data.icmph.checksum = 0; icmp_param.skb = skb_in; - icmp_param.offset = skb_in->nh.raw - skb_in->data; + icmp_param.offset = skb_network_offset(skb_in); icmp_out_count(icmp_param.data.icmph.type); inet_sk(icmp_socket->sk)->tos = tos; ipc.addr = iph->saddr; @@ -613,7 +614,7 @@ static void icmp_unreach(struct sk_buff if (!pskb_may_pull(skb, sizeof(struct iphdr))) goto out_err; - icmph = skb->h.icmph; + icmph = icmp_hdr(skb); iph = (struct iphdr *)skb->data; if (iph->ihl < 5) /* Mangled header, drop. 
*/ @@ -676,7 +677,7 @@ static void icmp_unreach(struct sk_buff printk(KERN_WARNING "%u.%u.%u.%u sent an invalid ICMP " "type %u, code %u " "error to a broadcast: %u.%u.%u.%u on %s\n", - NIPQUAD(skb->nh.iph->saddr), + NIPQUAD(ip_hdr(skb)->saddr), icmph->type, icmph->code, NIPQUAD(iph->daddr), skb->dev->name); @@ -743,7 +744,7 @@ static void icmp_redirect(struct sk_buff iph = (struct iphdr *)skb->data; - switch (skb->h.icmph->code & 7) { + switch (icmp_hdr(skb)->code & 7) { case ICMP_REDIR_NET: case ICMP_REDIR_NETTOS: /* @@ -751,8 +752,8 @@ static void icmp_redirect(struct sk_buff */ case ICMP_REDIR_HOST: case ICMP_REDIR_HOSTTOS: - ip_rt_redirect(skb->nh.iph->saddr, iph->daddr, - skb->h.icmph->un.gateway, + ip_rt_redirect(ip_hdr(skb)->saddr, iph->daddr, + icmp_hdr(skb)->un.gateway, iph->saddr, skb->dev); break; } @@ -780,7 +781,7 @@ static void icmp_echo(struct sk_buff *sk if (!sysctl_icmp_echo_ignore_all) { struct icmp_bxm icmp_param; - icmp_param.data.icmph = *skb->h.icmph; + icmp_param.data.icmph = *icmp_hdr(skb); icmp_param.data.icmph.type = ICMP_ECHOREPLY; icmp_param.skb = skb; icmp_param.offset = 0; @@ -816,7 +817,7 @@ static void icmp_timestamp(struct sk_buf icmp_param.data.times[2] = icmp_param.data.times[1]; if (skb_copy_bits(skb, 0, &icmp_param.data.times[0], 4)) BUG(); - icmp_param.data.icmph = *skb->h.icmph; + icmp_param.data.icmph = *icmp_hdr(skb); icmp_param.data.icmph.type = ICMP_TIMESTAMPREPLY; icmp_param.data.icmph.code = 0; icmp_param.skb = skb; @@ -943,7 +944,7 @@ int icmp_rcv(struct sk_buff *skb) if (!pskb_pull(skb, sizeof(struct icmphdr))) goto error; - icmph = skb->h.icmph; + icmph = icmp_hdr(skb); /* * 18 is the highest 'known' ICMP type. Anything else is a mystery diff -puN net/ipv4/igmp.c~git-net net/ipv4/igmp.c --- a/net/ipv4/igmp.c~git-net +++ a/net/ipv4/igmp.c @@ -314,7 +314,9 @@ static struct sk_buff *igmpv3_newpack(st skb_reserve(skb, LL_RESERVED_SPACE(dev)); - skb->nh.iph = pip =(struct iphdr *)skb_put(skb, sizeof(struct iphdr)+4); + skb_reset_network_header(skb); + pip = ip_hdr(skb); + skb_put(skb, sizeof(struct iphdr) + 4); pip->version = 4; pip->ihl = (sizeof(struct iphdr)+4)>>2; @@ -331,8 +333,9 @@ static struct sk_buff *igmpv3_newpack(st ((u8*)&pip[1])[2] = 0; ((u8*)&pip[1])[3] = 0; - pig =(struct igmpv3_report *)skb_put(skb, sizeof(*pig)); - skb->h.igmph = (struct igmphdr *)pig; + skb->transport_header = skb->network_header + sizeof(struct iphdr) + 4; + skb_put(skb, sizeof(*pig)); + pig = igmpv3_report_hdr(skb); pig->type = IGMPV3_HOST_MEMBERSHIP_REPORT; pig->resv1 = 0; pig->csum = 0; @@ -343,16 +346,14 @@ static struct sk_buff *igmpv3_newpack(st static int igmpv3_sendpack(struct sk_buff *skb) { - struct iphdr *pip = skb->nh.iph; - struct igmphdr *pig = skb->h.igmph; - int iplen, igmplen; + struct iphdr *pip = ip_hdr(skb); + struct igmphdr *pig = igmp_hdr(skb); + const int iplen = skb->tail - skb->network_header; + const int igmplen = skb->tail - skb->transport_header; - iplen = skb->tail - (unsigned char *)skb->nh.iph; pip->tot_len = htons(iplen); ip_send_check(pip); - - igmplen = skb->tail - (unsigned char *)skb->h.igmph; - pig->csum = ip_compute_csum((void *)skb->h.igmph, igmplen); + pig->csum = ip_compute_csum(igmp_hdr(skb), igmplen); return NF_HOOK(PF_INET, NF_IP_LOCAL_OUT, skb, NULL, skb->dev, dst_output); @@ -379,7 +380,7 @@ static struct sk_buff *add_grhead(struct pgr->grec_auxwords = 0; pgr->grec_nsrcs = 0; pgr->grec_mca = pmc->multiaddr; - pih = (struct igmpv3_report *)skb->h.igmph; + pih = igmpv3_report_hdr(skb); pih->ngrec = 
htons(ntohs(pih->ngrec)+1); *ppgr = pgr; return skb; @@ -412,7 +413,7 @@ static struct sk_buff *add_grec(struct s if (!*psf_list) goto empty_source; - pih = skb ? (struct igmpv3_report *)skb->h.igmph : NULL; + pih = skb ? igmpv3_report_hdr(skb) : NULL; /* EX and TO_EX get a fresh packet, if needed */ if (truncate) { @@ -664,7 +665,9 @@ static int igmp_send_report(struct in_de skb_reserve(skb, LL_RESERVED_SPACE(dev)); - skb->nh.iph = iph = (struct iphdr *)skb_put(skb, sizeof(struct iphdr)+4); + skb_reset_network_header(skb); + iph = ip_hdr(skb); + skb_put(skb, sizeof(struct iphdr) + 4); iph->version = 4; iph->ihl = (sizeof(struct iphdr)+4)>>2; @@ -827,8 +830,8 @@ static void igmp_heard_report(struct in_ static void igmp_heard_query(struct in_device *in_dev, struct sk_buff *skb, int len) { - struct igmphdr *ih = skb->h.igmph; - struct igmpv3_query *ih3 = (struct igmpv3_query *)ih; + struct igmphdr *ih = igmp_hdr(skb); + struct igmpv3_query *ih3 = igmpv3_query_hdr(skb); struct ip_mc_list *im; __be32 group = ih->group; int max_delay; @@ -861,12 +864,12 @@ static void igmp_heard_query(struct in_d if (!pskb_may_pull(skb, sizeof(struct igmpv3_query))) return; - ih3 = (struct igmpv3_query *) skb->h.raw; + ih3 = igmpv3_query_hdr(skb); if (ih3->nsrcs) { if (!pskb_may_pull(skb, sizeof(struct igmpv3_query) + ntohs(ih3->nsrcs)*sizeof(__be32))) return; - ih3 = (struct igmpv3_query *) skb->h.raw; + ih3 = igmpv3_query_hdr(skb); } max_delay = IGMPV3_MRC(ih3->code)*(HZ/IGMP_TIMER_SCALE); @@ -943,7 +946,7 @@ int igmp_rcv(struct sk_buff *skb) goto drop; } - ih = skb->h.igmph; + ih = igmp_hdr(skb); switch (ih->type) { case IGMP_HOST_MEMBERSHIP_QUERY: igmp_heard_query(in_dev, skb, len); @@ -2397,7 +2400,7 @@ static int igmp_mc_seq_show(struct seq_f return 0; } -static struct seq_operations igmp_mc_seq_ops = { +static const struct seq_operations igmp_mc_seq_ops = { .start = igmp_mc_seq_start, .next = igmp_mc_seq_next, .stop = igmp_mc_seq_stop, @@ -2571,7 +2574,7 @@ static int igmp_mcf_seq_show(struct seq_ return 0; } -static struct seq_operations igmp_mcf_seq_ops = { +static const struct seq_operations igmp_mcf_seq_ops = { .start = igmp_mcf_seq_start, .next = igmp_mcf_seq_next, .stop = igmp_mcf_seq_stop, diff -puN net/ipv4/inet_diag.c~git-net net/ipv4/inet_diag.c --- a/net/ipv4/inet_diag.c~git-net +++ a/net/ipv4/inet_diag.c @@ -27,6 +27,7 @@ #include #include #include +#include #include #include @@ -60,7 +61,7 @@ static int inet_csk_diag_fill(struct soc struct nlmsghdr *nlh; void *info = NULL; struct inet_diag_meminfo *minfo = NULL; - unsigned char *b = skb->tail; + unsigned char *b = skb_tail_pointer(skb); const struct inet_diag_handler *handler; handler = inet_diag_table[unlh->nlmsg_type]; @@ -147,12 +148,12 @@ static int inet_csk_diag_fill(struct soc icsk->icsk_ca_ops && icsk->icsk_ca_ops->get_info) icsk->icsk_ca_ops->get_info(sk, ext, skb); - nlh->nlmsg_len = skb->tail - b; + nlh->nlmsg_len = skb_tail_pointer(skb) - b; return skb->len; rtattr_failure: nlmsg_failure: - skb_trim(skb, b - skb->data); + nlmsg_trim(skb, b); return -EMSGSIZE; } @@ -163,7 +164,7 @@ static int inet_twsk_diag_fill(struct in { long tmo; struct inet_diag_msg *r; - const unsigned char *previous_tail = skb->tail; + const unsigned char *previous_tail = skb_tail_pointer(skb); struct nlmsghdr *nlh = NLMSG_PUT(skb, pid, seq, unlh->nlmsg_type, sizeof(*r)); @@ -205,10 +206,10 @@ static int inet_twsk_diag_fill(struct in &tw6->tw_v6_daddr); } #endif - nlh->nlmsg_len = skb->tail - previous_tail; + nlh->nlmsg_len = skb_tail_pointer(skb) - 
previous_tail; return skb->len; nlmsg_failure: - skb_trim(skb, previous_tail - skb->data); + nlmsg_trim(skb, previous_tail); return -EMSGSIZE; } @@ -535,7 +536,7 @@ static int inet_diag_fill_req(struct sk_ { const struct inet_request_sock *ireq = inet_rsk(req); struct inet_sock *inet = inet_sk(sk); - unsigned char *b = skb->tail; + unsigned char *b = skb_tail_pointer(skb); struct inet_diag_msg *r; struct nlmsghdr *nlh; long tmo; @@ -574,12 +575,12 @@ static int inet_diag_fill_req(struct sk_ &inet6_rsk(req)->rmt_addr); } #endif - nlh->nlmsg_len = skb->tail - b; + nlh->nlmsg_len = skb_tail_pointer(skb) - b; return skb->len; nlmsg_failure: - skb_trim(skb, b - skb->data); + nlmsg_trim(skb, b); return -1; } @@ -805,68 +806,43 @@ done: return skb->len; } -static inline int inet_diag_rcv_msg(struct sk_buff *skb, struct nlmsghdr *nlh) +static int inet_diag_rcv_msg(struct sk_buff *skb, struct nlmsghdr *nlh) { - if (!(nlh->nlmsg_flags&NLM_F_REQUEST)) - return 0; + int hdrlen = sizeof(struct inet_diag_req); - if (nlh->nlmsg_type >= INET_DIAG_GETSOCK_MAX) - goto err_inval; + if (nlh->nlmsg_type >= INET_DIAG_GETSOCK_MAX || + nlmsg_len(nlh) < hdrlen) + return -EINVAL; if (inet_diag_table[nlh->nlmsg_type] == NULL) return -ENOENT; - if (NLMSG_LENGTH(sizeof(struct inet_diag_req)) > skb->len) - goto err_inval; - - if (nlh->nlmsg_flags&NLM_F_DUMP) { - if (nlh->nlmsg_len > - (4 + NLMSG_SPACE(sizeof(struct inet_diag_req)))) { - struct rtattr *rta = (void *)(NLMSG_DATA(nlh) + - sizeof(struct inet_diag_req)); - if (rta->rta_type != INET_DIAG_REQ_BYTECODE || - rta->rta_len < 8 || - rta->rta_len > - (nlh->nlmsg_len - - NLMSG_SPACE(sizeof(struct inet_diag_req)))) - goto err_inval; - if (inet_diag_bc_audit(RTA_DATA(rta), RTA_PAYLOAD(rta))) - goto err_inval; + if (nlh->nlmsg_flags & NLM_F_DUMP) { + if (nlmsg_attrlen(nlh, hdrlen)) { + struct nlattr *attr; + + attr = nlmsg_find_attr(nlh, hdrlen, + INET_DIAG_REQ_BYTECODE); + if (attr == NULL || + nla_len(attr) < sizeof(struct inet_diag_bc_op) || + inet_diag_bc_audit(nla_data(attr), nla_len(attr))) + return -EINVAL; } + return netlink_dump_start(idiagnl, skb, nlh, inet_diag_dump, NULL); - } else - return inet_diag_get_exact(skb, nlh); - -err_inval: - return -EINVAL; -} - - -static inline void inet_diag_rcv_skb(struct sk_buff *skb) -{ - if (skb->len >= NLMSG_SPACE(0)) { - int err; - struct nlmsghdr *nlh = (struct nlmsghdr *)skb->data; - - if (nlh->nlmsg_len < sizeof(*nlh) || - skb->len < nlh->nlmsg_len) - return; - err = inet_diag_rcv_msg(skb, nlh); - if (err || nlh->nlmsg_flags & NLM_F_ACK) - netlink_ack(skb, nlh, err); } + + return inet_diag_get_exact(skb, nlh); } static void inet_diag_rcv(struct sock *sk, int len) { - struct sk_buff *skb; - unsigned int qlen = skb_queue_len(&sk->sk_receive_queue); + unsigned int qlen = 0; - while (qlen-- && (skb = skb_dequeue(&sk->sk_receive_queue))) { - inet_diag_rcv_skb(skb); - kfree_skb(skb); - } + do { + netlink_run_queue(sk, &qlen, &inet_diag_rcv_msg); + } while (qlen); } static DEFINE_SPINLOCK(inet_diag_register_lock); @@ -917,7 +893,7 @@ static int __init inet_diag_init(void) goto out; idiagnl = netlink_kernel_create(NETLINK_INET_DIAG, 0, inet_diag_rcv, - THIS_MODULE); + NULL, THIS_MODULE); if (idiagnl == NULL) goto out_free_table; err = 0; diff -puN net/ipv4/inetpeer.c~git-net net/ipv4/inetpeer.c --- a/net/ipv4/inetpeer.c~git-net +++ a/net/ipv4/inetpeer.c @@ -87,10 +87,12 @@ static DEFINE_RWLOCK(peer_pool_lock); static int peer_total; /* Exported for sysctl_net_ipv4. 
*/ -int inet_peer_threshold = 65536 + 128; /* start to throw entries more +int inet_peer_threshold __read_mostly = 65536 + 128; /* start to throw entries more * aggressively at this stage */ -int inet_peer_minttl = 120 * HZ; /* TTL under high load: 120 sec */ -int inet_peer_maxttl = 10 * 60 * HZ; /* usual time to live: 10 min */ +int inet_peer_minttl __read_mostly = 120 * HZ; /* TTL under high load: 120 sec */ +int inet_peer_maxttl __read_mostly = 10 * 60 * HZ; /* usual time to live: 10 min */ +int inet_peer_gc_mintime __read_mostly = 10 * HZ; +int inet_peer_gc_maxtime __read_mostly = 120 * HZ; static struct inet_peer *inet_peer_unused_head; static struct inet_peer **inet_peer_unused_tailp = &inet_peer_unused_head; @@ -99,9 +101,6 @@ static DEFINE_SPINLOCK(inet_peer_unused_ static void peer_check_expire(unsigned long dummy); static DEFINE_TIMER(peer_periodic_timer, peer_check_expire, 0, 0); -/* Exported for sysctl_net_ipv4. */ -int inet_peer_gc_mintime = 10 * HZ, - inet_peer_gc_maxtime = 120 * HZ; /* Called from ip_output.c:ip_init */ void __init inet_initpeers(void) @@ -151,20 +150,27 @@ static void unlink_from_unused(struct in spin_unlock_bh(&inet_peer_unused_lock); } -/* Called with local BH disabled and the pool lock held. */ -#define lookup(daddr) \ +/* + * Called with local BH disabled and the pool lock held. + * _stack is known to be NULL or not at compile time, + * so compiler will optimize the if (_stack) tests. + */ +#define lookup(_daddr,_stack) \ ({ \ struct inet_peer *u, **v; \ - stackptr = stack; \ - *stackptr++ = &peer_root; \ + if (_stack) { \ + stackptr = _stack; \ + *stackptr++ = &peer_root; \ + } \ for (u = peer_root; u != peer_avl_empty; ) { \ - if (daddr == u->v4daddr) \ + if (_daddr == u->v4daddr) \ break; \ - if ((__force __u32)daddr < (__force __u32)u->v4daddr) \ + if ((__force __u32)_daddr < (__force __u32)u->v4daddr) \ v = &u->avl_left; \ else \ v = &u->avl_right; \ - *stackptr++ = v; \ + if (_stack) \ + *stackptr++ = v; \ u = *v; \ } \ u; \ @@ -288,7 +294,7 @@ static void unlink_from_pool(struct inet if (atomic_read(&p->refcnt) == 1) { struct inet_peer **stack[PEER_MAXDEPTH]; struct inet_peer ***stackptr, ***delp; - if (lookup(p->v4daddr) != p) + if (lookup(p->v4daddr, stack) != p) BUG(); delp = stackptr - 1; /* *delp[0] == p */ if (p->avl_left == peer_avl_empty) { @@ -373,7 +379,7 @@ struct inet_peer *inet_getpeer(__be32 da /* Look up for the address quickly. */ read_lock_bh(&peer_pool_lock); - p = lookup(daddr); + p = lookup(daddr, NULL); if (p != peer_avl_empty) atomic_inc(&p->refcnt); read_unlock_bh(&peer_pool_lock); @@ -400,7 +406,7 @@ struct inet_peer *inet_getpeer(__be32 da write_lock_bh(&peer_pool_lock); /* Check if an entry has suddenly appeared. */ - p = lookup(daddr); + p = lookup(daddr, stack); if (p != peer_avl_empty) goto out_free; diff -puN net/ipv4/ip_forward.c~git-net net/ipv4/ip_forward.c --- a/net/ipv4/ip_forward.c~git-net +++ a/net/ipv4/ip_forward.c @@ -67,14 +67,14 @@ int ip_forward(struct sk_buff *skb) if (skb->pkt_type != PACKET_HOST) goto drop; - skb->ip_summed = CHECKSUM_NONE; + skb_forward_csum(skb); /* * According to the RFC, we must first decrease the TTL field. If * that reaches zero, we must reply an ICMP control message telling * that the packet's lifetime expired. 
*/ - if (skb->nh.iph->ttl <= 1) + if (ip_hdr(skb)->ttl <= 1) goto too_many_hops; if (!xfrm4_route_forward(skb)) @@ -85,10 +85,18 @@ int ip_forward(struct sk_buff *skb) if (opt->is_strictroute && rt->rt_dst != rt->rt_gateway) goto sr_failed; + if (unlikely(skb->len > dst_mtu(&rt->u.dst) && + (ip_hdr(skb)->frag_off & htons(IP_DF))) && !skb->local_df) { + IP_INC_STATS(IPSTATS_MIB_FRAGFAILS); + icmp_send(skb, ICMP_DEST_UNREACH, ICMP_FRAG_NEEDED, + htonl(dst_mtu(&rt->u.dst))); + goto drop; + } + /* We are about to mangle packet. Copy it! */ if (skb_cow(skb, LL_RESERVED_SPACE(rt->u.dst.dev)+rt->u.dst.header_len)) goto drop; - iph = skb->nh.iph; + iph = ip_hdr(skb); /* Decrease ttl after skb cow done */ ip_decrease_ttl(iph); diff -puN net/ipv4/ip_fragment.c~git-net net/ipv4/ip_fragment.c --- a/net/ipv4/ip_fragment.c~git-net +++ a/net/ipv4/ip_fragment.c @@ -92,7 +92,7 @@ struct ipq { spinlock_t lock; atomic_t refcnt; struct timer_list timer; /* when will this queue expire? */ - struct timeval stamp; + ktime_t stamp; int iif; unsigned int rid; struct inet_peer *peer; @@ -184,7 +184,7 @@ static __inline__ struct ipq *frag_alloc { struct ipq *qp = kmalloc(sizeof(struct ipq), GFP_ATOMIC); - if(!qp) + if (!qp) return NULL; atomic_add(sizeof(struct ipq), &ip_frag_mem); return qp; @@ -321,11 +321,11 @@ static struct ipq *ip_frag_intern(struct * promoted read lock to write lock. */ hlist_for_each_entry(qp, n, &ipq_hash[hash], list) { - if(qp->id == qp_in->id && - qp->saddr == qp_in->saddr && - qp->daddr == qp_in->daddr && - qp->protocol == qp_in->protocol && - qp->user == qp_in->user) { + if (qp->id == qp_in->id && + qp->saddr == qp_in->saddr && + qp->daddr == qp_in->daddr && + qp->protocol == qp_in->protocol && + qp->user == qp_in->user) { atomic_inc(&qp->refcnt); write_unlock(&ipfrag_lock); qp_in->last_in |= COMPLETE; @@ -398,11 +398,11 @@ static inline struct ipq *ip_find(struct read_lock(&ipfrag_lock); hash = ipqhashfn(id, saddr, daddr, protocol); hlist_for_each_entry(qp, n, &ipq_hash[hash], list) { - if(qp->id == id && - qp->saddr == saddr && - qp->daddr == daddr && - qp->protocol == protocol && - qp->user == user) { + if (qp->id == id && + qp->saddr == saddr && + qp->daddr == daddr && + qp->protocol == protocol && + qp->user == user) { atomic_inc(&qp->refcnt); read_unlock(&ipfrag_lock); return qp; @@ -479,11 +479,11 @@ static void ip_frag_queue(struct ipq *qp goto err; } - offset = ntohs(skb->nh.iph->frag_off); + offset = ntohs(ip_hdr(skb)->frag_off); flags = offset & ~IP_OFFSET; offset &= IP_OFFSET; offset <<= 3; /* offset is in 8-byte chunks */ - ihl = skb->nh.iph->ihl * 4; + ihl = ip_hdrlen(skb); /* Determine the position of this fragment. */ end = offset + skb->len - ihl; @@ -524,7 +524,7 @@ static void ip_frag_queue(struct ipq *qp * this fragment, right? */ prev = NULL; - for(next = qp->fragments; next != NULL; next = next->next) { + for (next = qp->fragments; next != NULL; next = next->next) { if (FRAG_CB(next)->offset >= offset) break; /* bingo! */ prev = next; @@ -592,7 +592,7 @@ static void ip_frag_queue(struct ipq *qp if (skb->dev) qp->iif = skb->dev->ifindex; skb->dev = NULL; - skb_get_timestamp(skb, &qp->stamp); + qp->stamp = skb->tstamp; qp->meat += skb->len; atomic_add(skb->truesize, &ip_frag_mem); if (offset == 0) @@ -624,10 +624,10 @@ static struct sk_buff *ip_frag_reasm(str BUG_TRAP(FRAG_CB(head)->offset == 0); /* Allocate a new buffer for the datagram. 
*/ - ihlen = head->nh.iph->ihl*4; + ihlen = ip_hdrlen(head); len = ihlen + qp->len; - if(len > 65535) + if (len > 65535) goto out_oversize; /* Head of list must not be cloned. */ @@ -658,7 +658,7 @@ static struct sk_buff *ip_frag_reasm(str } skb_shinfo(head)->frag_list = head->next; - skb_push(head, head->data - head->nh.raw); + skb_push(head, head->data - skb_network_header(head)); atomic_sub(head->truesize, &ip_frag_mem); for (fp=head->next; fp; fp = fp->next) { @@ -674,9 +674,9 @@ static struct sk_buff *ip_frag_reasm(str head->next = NULL; head->dev = dev; - skb_set_timestamp(head, &qp->stamp); + head->tstamp = qp->stamp; - iph = head->nh.iph; + iph = ip_hdr(head); iph->frag_off = 0; iph->tot_len = htons(len); IP_INC_STATS_BH(IPSTATS_MIB_REASMOKS); @@ -700,7 +700,6 @@ out_fail: /* Process an incoming IP datagram fragment. */ struct sk_buff *ip_defrag(struct sk_buff *skb, u32 user) { - struct iphdr *iph = skb->nh.iph; struct ipq *qp; struct net_device *dev; @@ -713,7 +712,7 @@ struct sk_buff *ip_defrag(struct sk_buff dev = skb->dev; /* Lookup (or create) queue header */ - if ((qp = ip_find(iph, user)) != NULL) { + if ((qp = ip_find(ip_hdr(skb), user)) != NULL) { struct sk_buff *ret = NULL; spin_lock(&qp->lock); @@ -734,7 +733,7 @@ struct sk_buff *ip_defrag(struct sk_buff return NULL; } -void ipfrag_init(void) +void __init ipfrag_init(void) { ipfrag_hash_rnd = (u32) ((num_physpages ^ (num_physpages>>7)) ^ (jiffies ^ (jiffies >> 6))); diff -puN net/ipv4/ip_gre.c~git-net net/ipv4/ip_gre.c --- a/net/ipv4/ip_gre.c~git-net +++ a/net/ipv4/ip_gre.c @@ -320,8 +320,8 @@ static void ipgre_err(struct sk_buff *sk struct iphdr *iph = (struct iphdr*)skb->data; __be16 *p = (__be16*)(skb->data+(iph->ihl<<2)); int grehlen = (iph->ihl<<2) + 4; - int type = skb->h.icmph->type; - int code = skb->h.icmph->code; + const int type = icmp_hdr(skb)->type; + const int code = icmp_hdr(skb)->code; struct ip_tunnel *t; __be16 flags; @@ -388,8 +388,8 @@ out: struct iphdr *iph = (struct iphdr*)dp; struct iphdr *eiph; __be16 *p = (__be16*)(dp+(iph->ihl<<2)); - int type = skb->h.icmph->type; - int code = skb->h.icmph->code; + const int type = icmp_hdr(skb)->type; + const int code = icmp_hdr(skb)->code; int rel_type = 0; int rel_code = 0; __be32 rel_info = 0; @@ -422,7 +422,7 @@ out: default: return; case ICMP_PARAMETERPROB: - n = ntohl(skb->h.icmph->un.gateway) >> 24; + n = ntohl(icmp_hdr(skb)->un.gateway) >> 24; if (n < (iph->ihl<<2)) return; @@ -442,7 +442,7 @@ out: return; case ICMP_FRAG_NEEDED: /* And it is the only really necessary thing :-) */ - n = ntohs(skb->h.icmph->un.frag.mtu); + n = ntohs(icmp_hdr(skb)->un.frag.mtu); if (n < grehlen+68) return; n -= grehlen; @@ -474,7 +474,7 @@ out: dst_release(skb2->dst); skb2->dst = NULL; skb_pull(skb2, skb->data - (u8*)eiph); - skb2->nh.raw = skb2->data; + skb_reset_network_header(skb2); /* Try to guess incoming interface */ memset(&fl, 0, sizeof(fl)); @@ -533,9 +533,9 @@ static inline void ipgre_ecn_decapsulate { if (INET_ECN_is_ce(iph->tos)) { if (skb->protocol == htons(ETH_P_IP)) { - IP_ECN_set_ce(skb->nh.iph); + IP_ECN_set_ce(ip_hdr(skb)); } else if (skb->protocol == htons(ETH_P_IPV6)) { - IP6_ECN_set_ce(skb->nh.ipv6h); + IP6_ECN_set_ce(ipv6_hdr(skb)); } } } @@ -565,7 +565,7 @@ static int ipgre_rcv(struct sk_buff *skb if (!pskb_may_pull(skb, 16)) goto drop_nolock; - iph = skb->nh.iph; + iph = ip_hdr(skb); h = skb->data; flags = *(__be16*)h; @@ -616,9 +616,10 @@ static int ipgre_rcv(struct sk_buff *skb offset += 4; } - skb->mac.raw = skb->nh.raw; - skb->nh.raw = 
__pskb_pull(skb, offset); - skb_postpull_rcsum(skb, skb->h.raw, offset); + skb_reset_mac_header(skb); + __pskb_pull(skb, offset); + skb_reset_network_header(skb); + skb_postpull_rcsum(skb, skb_transport_header(skb), offset); skb->pkt_type = PACKET_HOST; #ifdef CONFIG_NET_IPGRE_BROADCAST if (MULTICAST(iph->daddr)) { @@ -669,7 +670,7 @@ static int ipgre_tunnel_xmit(struct sk_b { struct ip_tunnel *tunnel = netdev_priv(dev); struct net_device_stats *stats = &tunnel->stat; - struct iphdr *old_iph = skb->nh.iph; + struct iphdr *old_iph = ip_hdr(skb); struct iphdr *tiph; u8 tos; __be16 df; @@ -720,7 +721,7 @@ static int ipgre_tunnel_xmit(struct sk_b addr_type = ipv6_addr_type(addr6); if (addr_type == IPV6_ADDR_ANY) { - addr6 = &skb->nh.ipv6h->daddr; + addr6 = &ipv6_hdr(skb)->daddr; addr_type = ipv6_addr_type(addr6); } @@ -824,11 +825,12 @@ static int ipgre_tunnel_xmit(struct sk_b skb_set_owner_w(new_skb, skb->sk); dev_kfree_skb(skb); skb = new_skb; - old_iph = skb->nh.iph; + old_iph = ip_hdr(skb); } - skb->h.raw = skb->nh.raw; - skb->nh.raw = skb_push(skb, gre_hlen); + skb->transport_header = skb->network_header; + skb_push(skb, gre_hlen); + skb_reset_network_header(skb); memset(&(IPCB(skb)->opt), 0, sizeof(IPCB(skb)->opt)); IPCB(skb)->flags &= ~(IPSKB_XFRM_TUNNEL_SIZE | IPSKB_XFRM_TRANSFORMED | IPSKB_REROUTED); @@ -839,7 +841,7 @@ static int ipgre_tunnel_xmit(struct sk_b * Push down and install the IPIP header. */ - iph = skb->nh.iph; + iph = ip_hdr(skb); iph->version = 4; iph->ihl = sizeof(struct iphdr) >> 2; iph->frag_off = df; diff -puN net/ipv4/ip_input.c~git-net net/ipv4/ip_input.c --- a/net/ipv4/ip_input.c~git-net +++ a/net/ipv4/ip_input.c @@ -158,7 +158,7 @@ DEFINE_SNMP_STAT(struct ipstats_mib, ip_ int ip_call_ra_chain(struct sk_buff *skb) { struct ip_ra_chain *ra; - u8 protocol = skb->nh.iph->protocol; + u8 protocol = ip_hdr(skb)->protocol; struct sock *last = NULL; read_lock(&ip_ra_lock); @@ -171,7 +171,7 @@ int ip_call_ra_chain(struct sk_buff *skb if (sk && inet_sk(sk)->num == protocol && (!sk->sk_bound_dev_if || sk->sk_bound_dev_if == skb->dev->ifindex)) { - if (skb->nh.iph->frag_off & htons(IP_MF|IP_OFFSET)) { + if (ip_hdr(skb)->frag_off & htons(IP_MF | IP_OFFSET)) { skb = ip_defrag(skb, IP_DEFRAG_CALL_RA_CHAIN); if (skb == NULL) { read_unlock(&ip_ra_lock); @@ -198,17 +198,15 @@ int ip_call_ra_chain(struct sk_buff *skb static inline int ip_local_deliver_finish(struct sk_buff *skb) { - int ihl = skb->nh.iph->ihl*4; - - __skb_pull(skb, ihl); + __skb_pull(skb, ip_hdrlen(skb)); /* Point into the IP datagram, just past the header. */ - skb->h.raw = skb->data; + skb_reset_transport_header(skb); rcu_read_lock(); { /* Note: See raw.c and net/raw.h, RAWV4_HTABLE_SIZE==MAX_INET_PROTOS */ - int protocol = skb->nh.iph->protocol; + int protocol = ip_hdr(skb)->protocol; int hash; struct sock *raw_sk; struct net_protocol *ipprot; @@ -220,7 +218,7 @@ static inline int ip_local_deliver_finis /* If there maybe a raw socket we must check - if not we * don't care less */ - if (raw_sk && !raw_v4_input(skb, skb->nh.iph, hash)) + if (raw_sk && !raw_v4_input(skb, ip_hdr(skb), hash)) raw_sk = NULL; if ((ipprot = rcu_dereference(inet_protos[hash])) != NULL) { @@ -266,7 +264,7 @@ int ip_local_deliver(struct sk_buff *skb * Reassemble IP fragments. 
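/*
 * The conversions above replace direct header pointers (skb->nh.iph,
 * skb->h.icmph, skb->nh.raw) with accessors such as ip_hdr(), icmp_hdr(),
 * skb_network_header() and skb_reset_network_header().  A rough userspace
 * analogue of the idea -- record the header position once, derive typed
 * pointers on demand -- with made-up names, not the real sk_buff layout:
 */
#include <stddef.h>
#include <stdint.h>

struct pkt {
        uint8_t *head;            /* start of the buffer     */
        uint8_t *data;            /* current parse position  */
        size_t   network_header;  /* offset of the IP header */
};

static inline void pkt_reset_network_header(struct pkt *p)
{
        p->network_header = (size_t)(p->data - p->head);
}

static inline uint8_t *pkt_network_header(const struct pkt *p)
{
        return p->head + p->network_header;
}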
*/ - if (skb->nh.iph->frag_off & htons(IP_MF|IP_OFFSET)) { + if (ip_hdr(skb)->frag_off & htons(IP_MF | IP_OFFSET)) { skb = ip_defrag(skb, IP_DEFRAG_LOCAL_DELIVER); if (!skb) return 0; @@ -294,7 +292,7 @@ static inline int ip_rcv_options(struct goto drop; } - iph = skb->nh.iph; + iph = ip_hdr(skb); if (ip_options_compile(NULL, skb)) { IP_INC_STATS_BH(IPSTATS_MIB_INHDRERRORS); @@ -330,7 +328,7 @@ drop: static inline int ip_rcv_finish(struct sk_buff *skb) { - struct iphdr *iph = skb->nh.iph; + const struct iphdr *iph = ip_hdr(skb); /* * Initialise the virtual path cache for the packet. It describes @@ -391,7 +389,7 @@ int ip_rcv(struct sk_buff *skb, struct n if (!pskb_may_pull(skb, sizeof(struct iphdr))) goto inhdr_error; - iph = skb->nh.iph; + iph = ip_hdr(skb); /* * RFC1122: 3.1.2.2 MUST silently discard any IP frame that fails the checksum. @@ -410,7 +408,7 @@ int ip_rcv(struct sk_buff *skb, struct n if (!pskb_may_pull(skb, iph->ihl*4)) goto inhdr_error; - iph = skb->nh.iph; + iph = ip_hdr(skb); if (unlikely(ip_fast_csum((u8 *)iph, iph->ihl))) goto inhdr_error; diff -puN net/ipv4/ip_options.c~git-net net/ipv4/ip_options.c --- a/net/ipv4/ip_options.c~git-net +++ a/net/ipv4/ip_options.c @@ -40,7 +40,7 @@ void ip_options_build(struct sk_buff * skb, struct ip_options * opt, __be32 daddr, struct rtable *rt, int is_frag) { - unsigned char * iph = skb->nh.raw; + unsigned char *iph = skb_network_header(skb); memcpy(&(IPCB(skb)->opt), opt, sizeof(struct ip_options)); memcpy(iph+sizeof(struct iphdr), opt->__data, opt->optlen); @@ -104,13 +104,13 @@ int ip_options_echo(struct ip_options * return 0; } - sptr = skb->nh.raw; + sptr = skb_network_header(skb); dptr = dopt->__data; if (skb->dst) daddr = ((struct rtable*)skb->dst)->rt_spec_dst; else - daddr = skb->nh.iph->daddr; + daddr = ip_hdr(skb)->daddr; if (sopt->rr) { optlen = sptr[sopt->rr+1]; @@ -180,7 +180,8 @@ int ip_options_echo(struct ip_options * /* * RFC1812 requires to fix illegal source routes. */ - if (memcmp(&skb->nh.iph->saddr, &start[soffset+3], 4) == 0) + if (memcmp(&ip_hdr(skb)->saddr, + &start[soffset + 3], 4) == 0) doffset -= 4; } if (doffset > 3) { @@ -217,7 +218,7 @@ int ip_options_echo(struct ip_options * void ip_options_fragment(struct sk_buff * skb) { - unsigned char * optptr = skb->nh.raw + sizeof(struct iphdr); + unsigned char *optptr = skb_network_header(skb) + sizeof(struct iphdr); struct ip_options * opt = &(IPCB(skb)->opt); int l = opt->optlen; int optlen; @@ -264,12 +265,13 @@ int ip_options_compile(struct ip_options if (!opt) { opt = &(IPCB(skb)->opt); - iph = skb->nh.raw; + iph = skb_network_header(skb); opt->optlen = ((struct iphdr *)iph)->ihl*4 - sizeof(struct iphdr); optptr = iph + sizeof(struct iphdr); opt->is_data = 0; } else { - optptr = opt->is_data ? opt->__data : (unsigned char*)&(skb->nh.iph[1]); + optptr = opt->is_data ? 
opt->__data : + (unsigned char *)&(ip_hdr(skb)[1]); iph = optptr - sizeof(struct iphdr); } @@ -563,7 +565,7 @@ void ip_forward_options(struct sk_buff * struct ip_options * opt = &(IPCB(skb)->opt); unsigned char * optptr; struct rtable *rt = (struct rtable*)skb->dst; - unsigned char *raw = skb->nh.raw; + unsigned char *raw = skb_network_header(skb); if (opt->rr_needaddr) { optptr = (unsigned char *)raw + opt->rr; @@ -587,7 +589,7 @@ void ip_forward_options(struct sk_buff * if (srrptr + 3 <= srrspace) { opt->is_changed = 1; ip_rt_get_source(&optptr[srrptr-1], rt); - skb->nh.iph->daddr = rt->rt_dst; + ip_hdr(skb)->daddr = rt->rt_dst; optptr[2] = srrptr+4; } else if (net_ratelimit()) printk(KERN_CRIT "ip_forward(): Argh! Destination lost!\n"); @@ -599,7 +601,7 @@ void ip_forward_options(struct sk_buff * } if (opt->is_changed) { opt->is_changed = 0; - ip_send_check(skb->nh.iph); + ip_send_check(ip_hdr(skb)); } } @@ -608,8 +610,8 @@ int ip_options_rcv_srr(struct sk_buff *s struct ip_options *opt = &(IPCB(skb)->opt); int srrspace, srrptr; __be32 nexthop; - struct iphdr *iph = skb->nh.iph; - unsigned char * optptr = skb->nh.raw + opt->srr; + struct iphdr *iph = ip_hdr(skb); + unsigned char *optptr = skb_network_header(skb) + opt->srr; struct rtable *rt = (struct rtable*)skb->dst; struct rtable *rt2; int err; diff -puN net/ipv4/ip_output.c~git-net net/ipv4/ip_output.c --- a/net/ipv4/ip_output.c~git-net +++ a/net/ipv4/ip_output.c @@ -95,8 +95,8 @@ __inline__ void ip_send_check(struct iph /* dev_loopback_xmit for use with netfilter. */ static int ip_dev_loopback_xmit(struct sk_buff *newskb) { - newskb->mac.raw = newskb->data; - __skb_pull(newskb, newskb->nh.raw - newskb->data); + skb_reset_mac_header(newskb); + __skb_pull(newskb, skb_network_offset(newskb)); newskb->pkt_type = PACKET_LOOPBACK; newskb->ip_summed = CHECKSUM_UNNECESSARY; BUG_TRAP(newskb->dst); @@ -125,11 +125,9 @@ int ip_build_and_send_pkt(struct sk_buff struct iphdr *iph; /* Build the IP header. */ - if (opt) - iph=(struct iphdr *)skb_push(skb,sizeof(struct iphdr) + opt->optlen); - else - iph=(struct iphdr *)skb_push(skb,sizeof(struct iphdr)); - + skb_push(skb, sizeof(struct iphdr) + (opt ? opt->optlen : 0)); + skb_reset_network_header(skb); + iph = ip_hdr(skb); iph->version = 4; iph->ihl = 5; iph->tos = inet->tos; @@ -143,7 +141,6 @@ int ip_build_and_send_pkt(struct sk_buff iph->protocol = sk->sk_protocol; iph->tot_len = htons(skb->len); ip_select_ident(iph, &rt->u.dst, sk); - skb->nh.iph = iph; if (opt && opt->optlen) { iph->ihl += opt->optlen>>2; @@ -192,6 +189,14 @@ static inline int ip_finish_output2(stru return -EINVAL; } +static inline int ip_skb_dst_mtu(struct sk_buff *skb) +{ + struct inet_sock *inet = skb->sk ? inet_sk(skb->sk) : NULL; + + return (inet && inet->pmtudisc == IP_PMTUDISC_PROBE) ? 
+ skb->dst->dev->mtu : dst_mtu(skb->dst); +} + static inline int ip_finish_output(struct sk_buff *skb) { #if defined(CONFIG_NETFILTER) && defined(CONFIG_XFRM) @@ -201,7 +206,7 @@ static inline int ip_finish_output(struc return dst_output(skb); } #endif - if (skb->len > dst_mtu(skb->dst) && !skb_is_gso(skb)) + if (skb->len > ip_skb_dst_mtu(skb) && !skb_is_gso(skb)) return ip_fragment(skb, ip_finish_output2); else return ip_finish_output2(skb); @@ -248,7 +253,7 @@ int ip_mc_output(struct sk_buff *skb) /* Multicasts with ttl 0 must not go beyond the host */ - if (skb->nh.iph->ttl == 0) { + if (ip_hdr(skb)->ttl == 0) { kfree_skb(skb); return 0; } @@ -333,7 +338,9 @@ packet_routed: goto no_route; /* OK, we know where to send it, allocate and build IP header. */ - iph = (struct iphdr *) skb_push(skb, sizeof(struct iphdr) + (opt ? opt->optlen : 0)); + skb_push(skb, sizeof(struct iphdr) + (opt ? opt->optlen : 0)); + skb_reset_network_header(skb); + iph = ip_hdr(skb); *((__be16 *)iph) = htons((4 << 12) | (5 << 8) | (inet->tos & 0xff)); iph->tot_len = htons(skb->len); if (ip_dont_fragment(sk, &rt->u.dst) && !ipfragok) @@ -344,7 +351,6 @@ packet_routed: iph->protocol = sk->sk_protocol; iph->saddr = rt->rt_src; iph->daddr = rt->rt_dst; - skb->nh.iph = iph; /* Transport layer set skb->h.foo itself. */ if (opt && opt->optlen) { @@ -386,21 +392,10 @@ static void ip_copy_metadata(struct sk_b #ifdef CONFIG_NET_SCHED to->tc_index = from->tc_index; #endif -#ifdef CONFIG_NETFILTER - /* Connection association is same as pre-frag packet */ - nf_conntrack_put(to->nfct); - to->nfct = from->nfct; - nf_conntrack_get(to->nfct); - to->nfctinfo = from->nfctinfo; + nf_copy(to, from); #if defined(CONFIG_IP_VS) || defined(CONFIG_IP_VS_MODULE) to->ipvs_property = from->ipvs_property; #endif -#ifdef CONFIG_BRIDGE_NETFILTER - nf_bridge_put(to->nf_bridge); - to->nf_bridge = from->nf_bridge; - nf_bridge_get(to->nf_bridge); -#endif -#endif skb_copy_secmark(to, from); } @@ -430,12 +425,12 @@ int ip_fragment(struct sk_buff *skb, int * Point into the IP datagram header. */ - iph = skb->nh.iph; + iph = ip_hdr(skb); if (unlikely((iph->frag_off & htons(IP_DF)) && !skb->local_df)) { IP_INC_STATS(IPSTATS_MIB_FRAGFAILS); icmp_send(skb, ICMP_DEST_UNREACH, ICMP_FRAG_NEEDED, - htonl(dst_mtu(&rt->u.dst))); + htonl(ip_skb_dst_mtu(skb))); kfree_skb(skb); return -EMSGSIZE; } @@ -502,10 +497,11 @@ int ip_fragment(struct sk_buff *skb, int * before previous one went down. */ if (frag) { frag->ip_summed = CHECKSUM_NONE; - frag->h.raw = frag->data; - frag->nh.raw = __skb_push(frag, hlen); - memcpy(frag->nh.raw, iph, hlen); - iph = frag->nh.iph; + skb_reset_transport_header(frag); + __skb_push(frag, hlen); + skb_reset_network_header(frag); + memcpy(skb_network_header(frag), iph, hlen); + iph = ip_hdr(frag); iph->tot_len = htons(frag->len); ip_copy_metadata(frag, skb); if (offset == 0) @@ -566,7 +562,7 @@ slow_path: * Keep copying data until we run out. */ - while(left > 0) { + while (left > 0) { len = left; /* IF: it doesn't fit, use 'mtu' - the data space left */ if (len > mtu) @@ -593,8 +589,8 @@ slow_path: ip_copy_metadata(skb2, skb); skb_reserve(skb2, ll_rs); skb_put(skb2, len + hlen); - skb2->nh.raw = skb2->data; - skb2->h.raw = skb2->data + hlen; + skb_reset_network_header(skb2); + skb2->transport_header = skb2->network_header + hlen; /* * Charge the memory for the fragment to any owner @@ -608,19 +604,19 @@ slow_path: * Copy the packet header into the new buffer. 
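/*
 * Sketch of the policy implemented by ip_skb_dst_mtu() in the hunk above:
 * when the sending socket selected IP_PMTUDISC_PROBE, size packets against
 * the device MTU rather than the (possibly smaller) cached route MTU, so
 * path-MTU probes may exceed the current estimate.  Illustrative types
 * only, not the kernel helpers.
 */
enum pmtudisc { PMTUDISC_DONT, PMTUDISC_WANT, PMTUDISC_DO, PMTUDISC_PROBE };

static inline unsigned int output_mtu(enum pmtudisc mode,
                                      unsigned int dev_mtu,
                                      unsigned int route_mtu)
{
        return mode == PMTUDISC_PROBE ? dev_mtu : route_mtu;
}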
*/ - memcpy(skb2->nh.raw, skb->data, hlen); + skb_copy_from_linear_data(skb, skb_network_header(skb2), hlen); /* * Copy a block of the IP datagram. */ - if (skb_copy_bits(skb, ptr, skb2->h.raw, len)) + if (skb_copy_bits(skb, ptr, skb_transport_header(skb2), len)) BUG(); left -= len; /* * Fill in the new header fields. */ - iph = skb2->nh.iph; + iph = ip_hdr(skb2); iph->frag_off = htons((offset >> 3)); /* ANK: dirty, but effective trick. Upgrade options only if @@ -722,10 +718,10 @@ static inline int ip_ufo_append_data(str skb_put(skb,fragheaderlen + transhdrlen); /* initialize network header pointer */ - skb->nh.raw = skb->data; + skb_reset_network_header(skb); /* initialize protocol header pointer */ - skb->h.raw = skb->data + fragheaderlen; + skb->transport_header = skb->network_header + fragheaderlen; skb->ip_summed = CHECKSUM_PARTIAL; skb->csum = 0; @@ -799,7 +795,9 @@ int ip_append_data(struct sock *sk, inet->cork.addr = ipc->addr; } dst_hold(&rt->u.dst); - inet->cork.fragsize = mtu = dst_mtu(rt->u.dst.path); + inet->cork.fragsize = mtu = inet->pmtudisc == IP_PMTUDISC_PROBE ? + rt->u.dst.dev->mtu : + dst_mtu(rt->u.dst.path); inet->cork.rt = rt; inet->cork.length = 0; sk->sk_sndmsg_page = NULL; @@ -929,9 +927,10 @@ alloc_new_skb: * Find where to start putting bytes. */ data = skb_put(skb, fraglen); - skb->nh.raw = data + exthdrlen; + skb_set_network_header(skb, exthdrlen); + skb->transport_header = (skb->network_header + + fragheaderlen); data += fragheaderlen; - skb->h.raw = data + exthdrlen; if (fraggap) { skb->csum = skb_copy_and_csum_bits( @@ -1100,8 +1099,6 @@ ssize_t ip_append_page(struct sock *sk, } if (len <= 0) { struct sk_buff *skb_prev; - char *data; - struct iphdr *iph; int alloclen; skb_prev = skb; @@ -1124,15 +1121,15 @@ ssize_t ip_append_page(struct sock *sk, /* * Find where to start putting bytes. */ - data = skb_put(skb, fragheaderlen + fraggap); - skb->nh.iph = iph = (struct iphdr *)data; - data += fragheaderlen; - skb->h.raw = data; - + skb_put(skb, fragheaderlen + fraggap); + skb_reset_network_header(skb); + skb->transport_header = (skb->network_header + + fragheaderlen); if (fraggap) { - skb->csum = skb_copy_and_csum_bits( - skb_prev, maxfraglen, - data, fraggap, 0); + skb->csum = skb_copy_and_csum_bits(skb_prev, + maxfraglen, + skb_transport_header(skb), + fraggap, 0); skb_prev->csum = csum_sub(skb_prev->csum, skb->csum); pskb_trim_unique(skb_prev, maxfraglen); @@ -1198,10 +1195,10 @@ int ip_push_pending_frames(struct sock * tail_skb = &(skb_shinfo(skb)->frag_list); /* move skb->data to ip header from ext header */ - if (skb->data < skb->nh.raw) - __skb_pull(skb, skb->nh.raw - skb->data); + if (skb->data < skb_network_header(skb)) + __skb_pull(skb, skb_network_offset(skb)); while ((tmp_skb = __skb_dequeue(&sk->sk_write_queue)) != NULL) { - __skb_pull(tmp_skb, skb->h.raw - skb->nh.raw); + __skb_pull(tmp_skb, skb_network_header_len(skb)); *tail_skb = tmp_skb; tail_skb = &(tmp_skb->next); skb->len += tmp_skb->len; @@ -1216,13 +1213,13 @@ int ip_push_pending_frames(struct sock * * to fragment the frame generated here. No matter, what transforms * how transforms change size of the packet, it will come out. */ - if (inet->pmtudisc != IP_PMTUDISC_DO) + if (inet->pmtudisc < IP_PMTUDISC_DO) skb->local_df = 1; /* DF bit is set when we want to see DF on outgoing frames. * If local_df is set too, we still allow to fragment this frame * locally. 
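/*
 * The two pmtudisc comparisons above change from equality tests against
 * IP_PMTUDISC_DO to ordered ones, relying on the modes being ordered
 * DONT(0) < WANT(1) < DO(2) < PROBE(3).  PROBE therefore takes the same
 * branch as DO for the pmtudisc part of the DF decision, and local_df is
 * left off.  Sketch of just that ordering trick:
 */
static inline int pmtudisc_requests_df(int pmtudisc)
{
        return pmtudisc >= 2;   /* IP_PMTUDISC_DO and above (PROBE) */
}

static inline int pmtudisc_allows_local_frag(int pmtudisc)
{
        return pmtudisc < 2;    /* IP_PMTUDISC_DONT and _WANT only  */
}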
*/ - if (inet->pmtudisc == IP_PMTUDISC_DO || + if (inet->pmtudisc >= IP_PMTUDISC_DO || (skb->len <= dst_mtu(&rt->u.dst) && ip_dont_fragment(sk, &rt->u.dst))) df = htons(IP_DF); @@ -1352,11 +1349,11 @@ void ip_send_reply(struct sock *sk, stru struct flowi fl = { .nl_u = { .ip4_u = { .daddr = daddr, .saddr = rt->rt_spec_dst, - .tos = RT_TOS(skb->nh.iph->tos) } }, + .tos = RT_TOS(ip_hdr(skb)->tos) } }, /* Not quite clean, but right. */ .uli_u = { .ports = - { .sport = skb->h.th->dest, - .dport = skb->h.th->source } }, + { .sport = tcp_hdr(skb)->dest, + .dport = tcp_hdr(skb)->source } }, .proto = sk->sk_protocol }; security_skb_classify_flow(skb, &fl); if (ip_route_output_key(&rt, &fl)) @@ -1370,14 +1367,16 @@ void ip_send_reply(struct sock *sk, stru with locally disabled BH and that sk cannot be already spinlocked. */ bh_lock_sock(sk); - inet->tos = skb->nh.iph->tos; + inet->tos = ip_hdr(skb)->tos; sk->sk_priority = skb->priority; - sk->sk_protocol = skb->nh.iph->protocol; + sk->sk_protocol = ip_hdr(skb)->protocol; ip_append_data(sk, ip_reply_glue_bits, arg->iov->iov_base, len, 0, &ipc, rt, MSG_DONTWAIT); if ((skb = skb_peek(&sk->sk_write_queue)) != NULL) { if (arg->csumoffset >= 0) - *((__sum16 *)skb->h.raw + arg->csumoffset) = csum_fold(csum_add(skb->csum, arg->csum)); + *((__sum16 *)skb_transport_header(skb) + + arg->csumoffset) = csum_fold(csum_add(skb->csum, + arg->csum)); skb->ip_summed = CHECKSUM_NONE; ip_push_pending_frames(sk); } diff -puN net/ipv4/ip_sockglue.c~git-net net/ipv4/ip_sockglue.c --- a/net/ipv4/ip_sockglue.c~git-net +++ a/net/ipv4/ip_sockglue.c @@ -59,7 +59,7 @@ static void ip_cmsg_recv_pktinfo(struct struct in_pktinfo info; struct rtable *rt = (struct rtable *)skb->dst; - info.ipi_addr.s_addr = skb->nh.iph->daddr; + info.ipi_addr.s_addr = ip_hdr(skb)->daddr; if (rt) { info.ipi_ifindex = rt->rt_iif; info.ipi_spec_dst.s_addr = rt->rt_spec_dst; @@ -73,13 +73,13 @@ static void ip_cmsg_recv_pktinfo(struct static void ip_cmsg_recv_ttl(struct msghdr *msg, struct sk_buff *skb) { - int ttl = skb->nh.iph->ttl; + int ttl = ip_hdr(skb)->ttl; put_cmsg(msg, SOL_IP, IP_TTL, sizeof(int), &ttl); } static void ip_cmsg_recv_tos(struct msghdr *msg, struct sk_buff *skb) { - put_cmsg(msg, SOL_IP, IP_TOS, 1, &skb->nh.iph->tos); + put_cmsg(msg, SOL_IP, IP_TOS, 1, &ip_hdr(skb)->tos); } static void ip_cmsg_recv_opts(struct msghdr *msg, struct sk_buff *skb) @@ -87,7 +87,8 @@ static void ip_cmsg_recv_opts(struct msg if (IPCB(skb)->opt.optlen == 0) return; - put_cmsg(msg, SOL_IP, IP_RECVOPTS, IPCB(skb)->opt.optlen, skb->nh.iph+1); + put_cmsg(msg, SOL_IP, IP_RECVOPTS, IPCB(skb)->opt.optlen, + ip_hdr(skb) + 1); } @@ -268,18 +269,21 @@ void ip_icmp_error(struct sock *sk, stru serr = SKB_EXT_ERR(skb); serr->ee.ee_errno = err; serr->ee.ee_origin = SO_EE_ORIGIN_ICMP; - serr->ee.ee_type = skb->h.icmph->type; - serr->ee.ee_code = skb->h.icmph->code; + serr->ee.ee_type = icmp_hdr(skb)->type; + serr->ee.ee_code = icmp_hdr(skb)->code; serr->ee.ee_pad = 0; serr->ee.ee_info = info; serr->ee.ee_data = 0; - serr->addr_offset = (u8*)&(((struct iphdr*)(skb->h.icmph+1))->daddr) - skb->nh.raw; + serr->addr_offset = (u8 *)&(((struct iphdr *)(icmp_hdr(skb) + 1))->daddr) - + skb_network_header(skb); serr->port = port; - skb->h.raw = payload; - if (!skb_pull(skb, payload - skb->data) || - sock_queue_err_skb(sk, skb)) - kfree_skb(skb); + if (skb_pull(skb, payload - skb->data) != NULL) { + skb_reset_transport_header(skb); + if (sock_queue_err_skb(sk, skb) == 0) + return; + } + kfree_skb(skb); } void ip_local_error(struct 
sock *sk, int err, __be32 daddr, __be16 port, u32 info) @@ -296,8 +300,9 @@ void ip_local_error(struct sock *sk, int if (!skb) return; - iph = (struct iphdr*)skb_put(skb, sizeof(struct iphdr)); - skb->nh.iph = iph; + skb_put(skb, sizeof(struct iphdr)); + skb_reset_network_header(skb); + iph = ip_hdr(skb); iph->daddr = daddr; serr = SKB_EXT_ERR(skb); @@ -308,11 +313,11 @@ void ip_local_error(struct sock *sk, int serr->ee.ee_pad = 0; serr->ee.ee_info = info; serr->ee.ee_data = 0; - serr->addr_offset = (u8*)&iph->daddr - skb->nh.raw; + serr->addr_offset = (u8 *)&iph->daddr - skb_network_header(skb); serr->port = port; - skb->h.raw = skb->tail; - __skb_pull(skb, skb->tail - skb->data); + __skb_pull(skb, skb_tail_pointer(skb) - skb->data); + skb_reset_transport_header(skb); if (sock_queue_err_skb(sk, skb)) kfree_skb(skb); @@ -354,7 +359,8 @@ int ip_recv_error(struct sock *sk, struc sin = (struct sockaddr_in *)msg->msg_name; if (sin) { sin->sin_family = AF_INET; - sin->sin_addr.s_addr = *(__be32*)(skb->nh.raw + serr->addr_offset); + sin->sin_addr.s_addr = *(__be32 *)(skb_network_header(skb) + + serr->addr_offset); sin->sin_port = serr->port; memset(&sin->sin_zero, 0, sizeof(sin->sin_zero)); } @@ -366,7 +372,7 @@ int ip_recv_error(struct sock *sk, struc struct inet_sock *inet = inet_sk(sk); sin->sin_family = AF_INET; - sin->sin_addr.s_addr = skb->nh.iph->saddr; + sin->sin_addr.s_addr = ip_hdr(skb)->saddr; sin->sin_port = 0; memset(&sin->sin_zero, 0, sizeof(sin->sin_zero)); if (inet->cmsg_flags) @@ -403,20 +409,20 @@ out: */ static int do_ip_setsockopt(struct sock *sk, int level, - int optname, char __user *optval, int optlen) + int optname, char __user *optval, int optlen) { struct inet_sock *inet = inet_sk(sk); int val=0,err; if (((1<= sizeof(int)) { if (get_user(val, (int __user *) optval)) return -EFAULT; @@ -440,444 +446,444 @@ static int do_ip_setsockopt(struct sock lock_sock(sk); switch (optname) { - case IP_OPTIONS: - { - struct ip_options * opt = NULL; - if (optlen > 40 || optlen < 0) - goto e_inval; - err = ip_options_get_from_user(&opt, optval, optlen); - if (err) - break; - if (inet->is_icsk) { - struct inet_connection_sock *icsk = inet_csk(sk); + case IP_OPTIONS: + { + struct ip_options * opt = NULL; + if (optlen > 40 || optlen < 0) + goto e_inval; + err = ip_options_get_from_user(&opt, optval, optlen); + if (err) + break; + if (inet->is_icsk) { + struct inet_connection_sock *icsk = inet_csk(sk); #if defined(CONFIG_IPV6) || defined(CONFIG_IPV6_MODULE) - if (sk->sk_family == PF_INET || - (!((1 << sk->sk_state) & - (TCPF_LISTEN | TCPF_CLOSE)) && - inet->daddr != LOOPBACK4_IPV6)) { + if (sk->sk_family == PF_INET || + (!((1 << sk->sk_state) & + (TCPF_LISTEN | TCPF_CLOSE)) && + inet->daddr != LOOPBACK4_IPV6)) { #endif - if (inet->opt) - icsk->icsk_ext_hdr_len -= inet->opt->optlen; - if (opt) - icsk->icsk_ext_hdr_len += opt->optlen; - icsk->icsk_sync_mss(sk, icsk->icsk_pmtu_cookie); + if (inet->opt) + icsk->icsk_ext_hdr_len -= inet->opt->optlen; + if (opt) + icsk->icsk_ext_hdr_len += opt->optlen; + icsk->icsk_sync_mss(sk, icsk->icsk_pmtu_cookie); #if defined(CONFIG_IPV6) || defined(CONFIG_IPV6_MODULE) - } -#endif } - opt = xchg(&inet->opt, opt); - kfree(opt); - break; +#endif } - case IP_PKTINFO: - if (val) - inet->cmsg_flags |= IP_CMSG_PKTINFO; - else - inet->cmsg_flags &= ~IP_CMSG_PKTINFO; - break; - case IP_RECVTTL: - if (val) - inet->cmsg_flags |= IP_CMSG_TTL; - else - inet->cmsg_flags &= ~IP_CMSG_TTL; - break; - case IP_RECVTOS: - if (val) - inet->cmsg_flags |= IP_CMSG_TOS; - else - 
inet->cmsg_flags &= ~IP_CMSG_TOS; - break; - case IP_RECVOPTS: - if (val) - inet->cmsg_flags |= IP_CMSG_RECVOPTS; - else - inet->cmsg_flags &= ~IP_CMSG_RECVOPTS; - break; - case IP_RETOPTS: - if (val) - inet->cmsg_flags |= IP_CMSG_RETOPTS; - else - inet->cmsg_flags &= ~IP_CMSG_RETOPTS; + opt = xchg(&inet->opt, opt); + kfree(opt); + break; + } + case IP_PKTINFO: + if (val) + inet->cmsg_flags |= IP_CMSG_PKTINFO; + else + inet->cmsg_flags &= ~IP_CMSG_PKTINFO; + break; + case IP_RECVTTL: + if (val) + inet->cmsg_flags |= IP_CMSG_TTL; + else + inet->cmsg_flags &= ~IP_CMSG_TTL; + break; + case IP_RECVTOS: + if (val) + inet->cmsg_flags |= IP_CMSG_TOS; + else + inet->cmsg_flags &= ~IP_CMSG_TOS; + break; + case IP_RECVOPTS: + if (val) + inet->cmsg_flags |= IP_CMSG_RECVOPTS; + else + inet->cmsg_flags &= ~IP_CMSG_RECVOPTS; + break; + case IP_RETOPTS: + if (val) + inet->cmsg_flags |= IP_CMSG_RETOPTS; + else + inet->cmsg_flags &= ~IP_CMSG_RETOPTS; + break; + case IP_PASSSEC: + if (val) + inet->cmsg_flags |= IP_CMSG_PASSSEC; + else + inet->cmsg_flags &= ~IP_CMSG_PASSSEC; + break; + case IP_TOS: /* This sets both TOS and Precedence */ + if (sk->sk_type == SOCK_STREAM) { + val &= ~3; + val |= inet->tos & 3; + } + if (IPTOS_PREC(val) >= IPTOS_PREC_CRITIC_ECP && + !capable(CAP_NET_ADMIN)) { + err = -EPERM; break; - case IP_PASSSEC: - if (val) - inet->cmsg_flags |= IP_CMSG_PASSSEC; - else - inet->cmsg_flags &= ~IP_CMSG_PASSSEC; + } + if (inet->tos != val) { + inet->tos = val; + sk->sk_priority = rt_tos2priority(val); + sk_dst_reset(sk); + } + break; + case IP_TTL: + if (optlen<1) + goto e_inval; + if (val != -1 && (val < 1 || val>255)) + goto e_inval; + inet->uc_ttl = val; + break; + case IP_HDRINCL: + if (sk->sk_type != SOCK_RAW) { + err = -ENOPROTOOPT; break; - case IP_TOS: /* This sets both TOS and Precedence */ - if (sk->sk_type == SOCK_STREAM) { - val &= ~3; - val |= inet->tos & 3; - } - if (IPTOS_PREC(val) >= IPTOS_PREC_CRITIC_ECP && - !capable(CAP_NET_ADMIN)) { - err = -EPERM; + } + inet->hdrincl = val ? 1 : 0; + break; + case IP_MTU_DISCOVER: + if (val<0 || val>3) + goto e_inval; + inet->pmtudisc = val; + break; + case IP_RECVERR: + inet->recverr = !!val; + if (!val) + skb_queue_purge(&sk->sk_error_queue); + break; + case IP_MULTICAST_TTL: + if (sk->sk_type == SOCK_STREAM) + goto e_inval; + if (optlen<1) + goto e_inval; + if (val==-1) + val = 1; + if (val < 0 || val > 255) + goto e_inval; + inet->mc_ttl = val; + break; + case IP_MULTICAST_LOOP: + if (optlen<1) + goto e_inval; + inet->mc_loop = !!val; + break; + case IP_MULTICAST_IF: + { + struct ip_mreqn mreq; + struct net_device *dev = NULL; + + if (sk->sk_type == SOCK_STREAM) + goto e_inval; + /* + * Check the arguments are allowable + */ + + err = -EFAULT; + if (optlen >= sizeof(struct ip_mreqn)) { + if (copy_from_user(&mreq,optval,sizeof(mreq))) break; - } - if (inet->tos != val) { - inet->tos = val; - sk->sk_priority = rt_tos2priority(val); - sk_dst_reset(sk); - } - break; - case IP_TTL: - if (optlen<1) - goto e_inval; - if (val != -1 && (val < 1 || val>255)) - goto e_inval; - inet->uc_ttl = val; - break; - case IP_HDRINCL: - if (sk->sk_type != SOCK_RAW) { - err = -ENOPROTOOPT; + } else { + memset(&mreq, 0, sizeof(mreq)); + if (optlen >= sizeof(struct in_addr) && + copy_from_user(&mreq.imr_address,optval,sizeof(struct in_addr))) + break; + } + + if (!mreq.imr_ifindex) { + if (mreq.imr_address.s_addr == INADDR_ANY) { + inet->mc_index = 0; + inet->mc_addr = 0; + err = 0; break; } - inet->hdrincl = val ? 
1 : 0; - break; - case IP_MTU_DISCOVER: - if (val<0 || val>2) - goto e_inval; - inet->pmtudisc = val; - break; - case IP_RECVERR: - inet->recverr = !!val; - if (!val) - skb_queue_purge(&sk->sk_error_queue); - break; - case IP_MULTICAST_TTL: - if (sk->sk_type == SOCK_STREAM) - goto e_inval; - if (optlen<1) - goto e_inval; - if (val==-1) - val = 1; - if (val < 0 || val > 255) - goto e_inval; - inet->mc_ttl = val; - break; - case IP_MULTICAST_LOOP: - if (optlen<1) - goto e_inval; - inet->mc_loop = !!val; - break; - case IP_MULTICAST_IF: - { - struct ip_mreqn mreq; - struct net_device *dev = NULL; + dev = ip_dev_find(mreq.imr_address.s_addr); + if (dev) { + mreq.imr_ifindex = dev->ifindex; + dev_put(dev); + } + } else + dev = __dev_get_by_index(mreq.imr_ifindex); - if (sk->sk_type == SOCK_STREAM) - goto e_inval; - /* - * Check the arguments are allowable - */ - err = -EFAULT; - if (optlen >= sizeof(struct ip_mreqn)) { - if (copy_from_user(&mreq,optval,sizeof(mreq))) - break; - } else { - memset(&mreq, 0, sizeof(mreq)); - if (optlen >= sizeof(struct in_addr) && - copy_from_user(&mreq.imr_address,optval,sizeof(struct in_addr))) - break; - } + err = -EADDRNOTAVAIL; + if (!dev) + break; - if (!mreq.imr_ifindex) { - if (mreq.imr_address.s_addr == INADDR_ANY) { - inet->mc_index = 0; - inet->mc_addr = 0; - err = 0; - break; - } - dev = ip_dev_find(mreq.imr_address.s_addr); - if (dev) { - mreq.imr_ifindex = dev->ifindex; - dev_put(dev); - } - } else - dev = __dev_get_by_index(mreq.imr_ifindex); + err = -EINVAL; + if (sk->sk_bound_dev_if && + mreq.imr_ifindex != sk->sk_bound_dev_if) + break; + inet->mc_index = mreq.imr_ifindex; + inet->mc_addr = mreq.imr_address.s_addr; + err = 0; + break; + } - err = -EADDRNOTAVAIL; - if (!dev) - break; + case IP_ADD_MEMBERSHIP: + case IP_DROP_MEMBERSHIP: + { + struct ip_mreqn mreq; - err = -EINVAL; - if (sk->sk_bound_dev_if && - mreq.imr_ifindex != sk->sk_bound_dev_if) + if (optlen < sizeof(struct ip_mreq)) + goto e_inval; + err = -EFAULT; + if (optlen >= sizeof(struct ip_mreqn)) { + if (copy_from_user(&mreq,optval,sizeof(mreq))) break; + } else { + memset(&mreq, 0, sizeof(mreq)); + if (copy_from_user(&mreq,optval,sizeof(struct ip_mreq))) + break; + } - inet->mc_index = mreq.imr_ifindex; - inet->mc_addr = mreq.imr_address.s_addr; - err = 0; + if (optname == IP_ADD_MEMBERSHIP) + err = ip_mc_join_group(sk, &mreq); + else + err = ip_mc_leave_group(sk, &mreq); + break; + } + case IP_MSFILTER: + { + extern int sysctl_igmp_max_msf; + struct ip_msfilter *msf; + + if (optlen < IP_MSFILTER_SIZE(0)) + goto e_inval; + if (optlen > sysctl_optmem_max) { + err = -ENOBUFS; break; } + msf = kmalloc(optlen, GFP_KERNEL); + if (msf == 0) { + err = -ENOBUFS; + break; + } + err = -EFAULT; + if (copy_from_user(msf, optval, optlen)) { + kfree(msf); + break; + } + /* numsrc >= (1G-4) overflow in 32 bits */ + if (msf->imsf_numsrc >= 0x3ffffffcU || + msf->imsf_numsrc > sysctl_igmp_max_msf) { + kfree(msf); + err = -ENOBUFS; + break; + } + if (IP_MSFILTER_SIZE(msf->imsf_numsrc) > optlen) { + kfree(msf); + err = -EINVAL; + break; + } + err = ip_mc_msfilter(sk, msf, 0); + kfree(msf); + break; + } + case IP_BLOCK_SOURCE: + case IP_UNBLOCK_SOURCE: + case IP_ADD_SOURCE_MEMBERSHIP: + case IP_DROP_SOURCE_MEMBERSHIP: + { + struct ip_mreq_source mreqs; + int omode, add; - case IP_ADD_MEMBERSHIP: - case IP_DROP_MEMBERSHIP: - { - struct ip_mreqn mreq; - - if (optlen < sizeof(struct ip_mreq)) - goto e_inval; + if (optlen != sizeof(struct ip_mreq_source)) + goto e_inval; + if (copy_from_user(&mreqs, optval, 
sizeof(mreqs))) { err = -EFAULT; - if (optlen >= sizeof(struct ip_mreqn)) { - if(copy_from_user(&mreq,optval,sizeof(mreq))) - break; - } else { - memset(&mreq, 0, sizeof(mreq)); - if (copy_from_user(&mreq,optval,sizeof(struct ip_mreq))) - break; - } - - if (optname == IP_ADD_MEMBERSHIP) - err = ip_mc_join_group(sk, &mreq); - else - err = ip_mc_leave_group(sk, &mreq); break; } - case IP_MSFILTER: - { - extern int sysctl_igmp_max_msf; - struct ip_msfilter *msf; + if (optname == IP_BLOCK_SOURCE) { + omode = MCAST_EXCLUDE; + add = 1; + } else if (optname == IP_UNBLOCK_SOURCE) { + omode = MCAST_EXCLUDE; + add = 0; + } else if (optname == IP_ADD_SOURCE_MEMBERSHIP) { + struct ip_mreqn mreq; - if (optlen < IP_MSFILTER_SIZE(0)) - goto e_inval; - if (optlen > sysctl_optmem_max) { - err = -ENOBUFS; - break; - } - msf = kmalloc(optlen, GFP_KERNEL); - if (msf == 0) { - err = -ENOBUFS; + mreq.imr_multiaddr.s_addr = mreqs.imr_multiaddr; + mreq.imr_address.s_addr = mreqs.imr_interface; + mreq.imr_ifindex = 0; + err = ip_mc_join_group(sk, &mreq); + if (err && err != -EADDRINUSE) break; - } + omode = MCAST_INCLUDE; + add = 1; + } else /* IP_DROP_SOURCE_MEMBERSHIP */ { + omode = MCAST_INCLUDE; + add = 0; + } + err = ip_mc_source(add, omode, sk, &mreqs, 0); + break; + } + case MCAST_JOIN_GROUP: + case MCAST_LEAVE_GROUP: + { + struct group_req greq; + struct sockaddr_in *psin; + struct ip_mreqn mreq; + + if (optlen < sizeof(struct group_req)) + goto e_inval; + err = -EFAULT; + if (copy_from_user(&greq, optval, sizeof(greq))) + break; + psin = (struct sockaddr_in *)&greq.gr_group; + if (psin->sin_family != AF_INET) + goto e_inval; + memset(&mreq, 0, sizeof(mreq)); + mreq.imr_multiaddr = psin->sin_addr; + mreq.imr_ifindex = greq.gr_interface; + + if (optname == MCAST_JOIN_GROUP) + err = ip_mc_join_group(sk, &mreq); + else + err = ip_mc_leave_group(sk, &mreq); + break; + } + case MCAST_JOIN_SOURCE_GROUP: + case MCAST_LEAVE_SOURCE_GROUP: + case MCAST_BLOCK_SOURCE: + case MCAST_UNBLOCK_SOURCE: + { + struct group_source_req greqs; + struct ip_mreq_source mreqs; + struct sockaddr_in *psin; + int omode, add; + + if (optlen != sizeof(struct group_source_req)) + goto e_inval; + if (copy_from_user(&greqs, optval, sizeof(greqs))) { err = -EFAULT; - if (copy_from_user(msf, optval, optlen)) { - kfree(msf); - break; - } - /* numsrc >= (1G-4) overflow in 32 bits */ - if (msf->imsf_numsrc >= 0x3ffffffcU || - msf->imsf_numsrc > sysctl_igmp_max_msf) { - kfree(msf); - err = -ENOBUFS; - break; - } - if (IP_MSFILTER_SIZE(msf->imsf_numsrc) > optlen) { - kfree(msf); - err = -EINVAL; - break; - } - err = ip_mc_msfilter(sk, msf, 0); - kfree(msf); break; } - case IP_BLOCK_SOURCE: - case IP_UNBLOCK_SOURCE: - case IP_ADD_SOURCE_MEMBERSHIP: - case IP_DROP_SOURCE_MEMBERSHIP: - { - struct ip_mreq_source mreqs; - int omode, add; - - if (optlen != sizeof(struct ip_mreq_source)) - goto e_inval; - if (copy_from_user(&mreqs, optval, sizeof(mreqs))) { - err = -EFAULT; - break; - } - if (optname == IP_BLOCK_SOURCE) { - omode = MCAST_EXCLUDE; - add = 1; - } else if (optname == IP_UNBLOCK_SOURCE) { - omode = MCAST_EXCLUDE; - add = 0; - } else if (optname == IP_ADD_SOURCE_MEMBERSHIP) { - struct ip_mreqn mreq; - - mreq.imr_multiaddr.s_addr = mreqs.imr_multiaddr; - mreq.imr_address.s_addr = mreqs.imr_interface; - mreq.imr_ifindex = 0; - err = ip_mc_join_group(sk, &mreq); - if (err && err != -EADDRINUSE) - break; - omode = MCAST_INCLUDE; - add = 1; - } else /* IP_DROP_SOURCE_MEMBERSHIP */ { - omode = MCAST_INCLUDE; - add = 0; - } - err = 
ip_mc_source(add, omode, sk, &mreqs, 0); + if (greqs.gsr_group.ss_family != AF_INET || + greqs.gsr_source.ss_family != AF_INET) { + err = -EADDRNOTAVAIL; break; } - case MCAST_JOIN_GROUP: - case MCAST_LEAVE_GROUP: - { - struct group_req greq; - struct sockaddr_in *psin; + psin = (struct sockaddr_in *)&greqs.gsr_group; + mreqs.imr_multiaddr = psin->sin_addr.s_addr; + psin = (struct sockaddr_in *)&greqs.gsr_source; + mreqs.imr_sourceaddr = psin->sin_addr.s_addr; + mreqs.imr_interface = 0; /* use index for mc_source */ + + if (optname == MCAST_BLOCK_SOURCE) { + omode = MCAST_EXCLUDE; + add = 1; + } else if (optname == MCAST_UNBLOCK_SOURCE) { + omode = MCAST_EXCLUDE; + add = 0; + } else if (optname == MCAST_JOIN_SOURCE_GROUP) { struct ip_mreqn mreq; - if (optlen < sizeof(struct group_req)) - goto e_inval; - err = -EFAULT; - if(copy_from_user(&greq, optval, sizeof(greq))) - break; - psin = (struct sockaddr_in *)&greq.gr_group; - if (psin->sin_family != AF_INET) - goto e_inval; - memset(&mreq, 0, sizeof(mreq)); + psin = (struct sockaddr_in *)&greqs.gsr_group; mreq.imr_multiaddr = psin->sin_addr; - mreq.imr_ifindex = greq.gr_interface; - - if (optname == MCAST_JOIN_GROUP) - err = ip_mc_join_group(sk, &mreq); - else - err = ip_mc_leave_group(sk, &mreq); + mreq.imr_address.s_addr = 0; + mreq.imr_ifindex = greqs.gsr_interface; + err = ip_mc_join_group(sk, &mreq); + if (err && err != -EADDRINUSE) + break; + greqs.gsr_interface = mreq.imr_ifindex; + omode = MCAST_INCLUDE; + add = 1; + } else /* MCAST_LEAVE_SOURCE_GROUP */ { + omode = MCAST_INCLUDE; + add = 0; + } + err = ip_mc_source(add, omode, sk, &mreqs, + greqs.gsr_interface); + break; + } + case MCAST_MSFILTER: + { + extern int sysctl_igmp_max_msf; + struct sockaddr_in *psin; + struct ip_msfilter *msf = NULL; + struct group_filter *gsf = NULL; + int msize, i, ifindex; + + if (optlen < GROUP_FILTER_SIZE(0)) + goto e_inval; + if (optlen > sysctl_optmem_max) { + err = -ENOBUFS; break; } - case MCAST_JOIN_SOURCE_GROUP: - case MCAST_LEAVE_SOURCE_GROUP: - case MCAST_BLOCK_SOURCE: - case MCAST_UNBLOCK_SOURCE: - { - struct group_source_req greqs; - struct ip_mreq_source mreqs; - struct sockaddr_in *psin; - int omode, add; - - if (optlen != sizeof(struct group_source_req)) - goto e_inval; - if (copy_from_user(&greqs, optval, sizeof(greqs))) { - err = -EFAULT; - break; - } - if (greqs.gsr_group.ss_family != AF_INET || - greqs.gsr_source.ss_family != AF_INET) { - err = -EADDRNOTAVAIL; - break; - } - psin = (struct sockaddr_in *)&greqs.gsr_group; - mreqs.imr_multiaddr = psin->sin_addr.s_addr; - psin = (struct sockaddr_in *)&greqs.gsr_source; - mreqs.imr_sourceaddr = psin->sin_addr.s_addr; - mreqs.imr_interface = 0; /* use index for mc_source */ - - if (optname == MCAST_BLOCK_SOURCE) { - omode = MCAST_EXCLUDE; - add = 1; - } else if (optname == MCAST_UNBLOCK_SOURCE) { - omode = MCAST_EXCLUDE; - add = 0; - } else if (optname == MCAST_JOIN_SOURCE_GROUP) { - struct ip_mreqn mreq; - - psin = (struct sockaddr_in *)&greqs.gsr_group; - mreq.imr_multiaddr = psin->sin_addr; - mreq.imr_address.s_addr = 0; - mreq.imr_ifindex = greqs.gsr_interface; - err = ip_mc_join_group(sk, &mreq); - if (err && err != -EADDRINUSE) - break; - greqs.gsr_interface = mreq.imr_ifindex; - omode = MCAST_INCLUDE; - add = 1; - } else /* MCAST_LEAVE_SOURCE_GROUP */ { - omode = MCAST_INCLUDE; - add = 0; - } - err = ip_mc_source(add, omode, sk, &mreqs, - greqs.gsr_interface); + gsf = kmalloc(optlen,GFP_KERNEL); + if (gsf == 0) { + err = -ENOBUFS; break; } - case MCAST_MSFILTER: - { - extern int 
sysctl_igmp_max_msf; - struct sockaddr_in *psin; - struct ip_msfilter *msf = NULL; - struct group_filter *gsf = NULL; - int msize, i, ifindex; - - if (optlen < GROUP_FILTER_SIZE(0)) - goto e_inval; - if (optlen > sysctl_optmem_max) { - err = -ENOBUFS; - break; - } - gsf = kmalloc(optlen,GFP_KERNEL); - if (gsf == 0) { - err = -ENOBUFS; - break; - } - err = -EFAULT; - if (copy_from_user(gsf, optval, optlen)) { - goto mc_msf_out; - } - /* numsrc >= (4G-140)/128 overflow in 32 bits */ - if (gsf->gf_numsrc >= 0x1ffffff || - gsf->gf_numsrc > sysctl_igmp_max_msf) { - err = -ENOBUFS; - goto mc_msf_out; - } - if (GROUP_FILTER_SIZE(gsf->gf_numsrc) > optlen) { - err = -EINVAL; - goto mc_msf_out; - } - msize = IP_MSFILTER_SIZE(gsf->gf_numsrc); - msf = kmalloc(msize,GFP_KERNEL); - if (msf == 0) { - err = -ENOBUFS; - goto mc_msf_out; - } - ifindex = gsf->gf_interface; - psin = (struct sockaddr_in *)&gsf->gf_group; - if (psin->sin_family != AF_INET) { - err = -EADDRNOTAVAIL; - goto mc_msf_out; - } - msf->imsf_multiaddr = psin->sin_addr.s_addr; - msf->imsf_interface = 0; - msf->imsf_fmode = gsf->gf_fmode; - msf->imsf_numsrc = gsf->gf_numsrc; + err = -EFAULT; + if (copy_from_user(gsf, optval, optlen)) { + goto mc_msf_out; + } + /* numsrc >= (4G-140)/128 overflow in 32 bits */ + if (gsf->gf_numsrc >= 0x1ffffff || + gsf->gf_numsrc > sysctl_igmp_max_msf) { + err = -ENOBUFS; + goto mc_msf_out; + } + if (GROUP_FILTER_SIZE(gsf->gf_numsrc) > optlen) { + err = -EINVAL; + goto mc_msf_out; + } + msize = IP_MSFILTER_SIZE(gsf->gf_numsrc); + msf = kmalloc(msize,GFP_KERNEL); + if (msf == 0) { + err = -ENOBUFS; + goto mc_msf_out; + } + ifindex = gsf->gf_interface; + psin = (struct sockaddr_in *)&gsf->gf_group; + if (psin->sin_family != AF_INET) { err = -EADDRNOTAVAIL; - for (i=0; igf_numsrc; ++i) { - psin = (struct sockaddr_in *)&gsf->gf_slist[i]; - - if (psin->sin_family != AF_INET) - goto mc_msf_out; - msf->imsf_slist[i] = psin->sin_addr.s_addr; - } - kfree(gsf); - gsf = NULL; - - err = ip_mc_msfilter(sk, msf, ifindex); -mc_msf_out: - kfree(msf); - kfree(gsf); - break; + goto mc_msf_out; } - case IP_ROUTER_ALERT: - err = ip_ra_control(sk, val ? 1 : 0, NULL); - break; + msf->imsf_multiaddr = psin->sin_addr.s_addr; + msf->imsf_interface = 0; + msf->imsf_fmode = gsf->gf_fmode; + msf->imsf_numsrc = gsf->gf_numsrc; + err = -EADDRNOTAVAIL; + for (i=0; igf_numsrc; ++i) { + psin = (struct sockaddr_in *)&gsf->gf_slist[i]; - case IP_FREEBIND: - if (optlen<1) - goto e_inval; - inet->freebind = !!val; - break; - - case IP_IPSEC_POLICY: - case IP_XFRM_POLICY: - err = -EPERM; - if (!capable(CAP_NET_ADMIN)) - break; - err = xfrm_user_policy(sk, optname, optval, optlen); - break; + if (psin->sin_family != AF_INET) + goto mc_msf_out; + msf->imsf_slist[i] = psin->sin_addr.s_addr; + } + kfree(gsf); + gsf = NULL; - default: - err = -ENOPROTOOPT; - break; + err = ip_mc_msfilter(sk, msf, ifindex); + mc_msf_out: + kfree(msf); + kfree(gsf); + break; + } + case IP_ROUTER_ALERT: + err = ip_ra_control(sk, val ? 
1 : 0, NULL); + break; + + case IP_FREEBIND: + if (optlen<1) + goto e_inval; + inet->freebind = !!val; + break; + + case IP_IPSEC_POLICY: + case IP_XFRM_POLICY: + err = -EPERM; + if (!capable(CAP_NET_ADMIN)) + break; + err = xfrm_user_policy(sk, optname, optval, optlen); + break; + + default: + err = -ENOPROTOOPT; + break; } release_sock(sk); return err; @@ -948,214 +954,213 @@ EXPORT_SYMBOL(compat_ip_setsockopt); */ static int do_ip_getsockopt(struct sock *sk, int level, int optname, - char __user *optval, int __user *optlen) + char __user *optval, int __user *optlen) { struct inet_sock *inet = inet_sk(sk); int val; int len; - if(level!=SOL_IP) + if (level != SOL_IP) return -EOPNOTSUPP; #ifdef CONFIG_IP_MROUTE - if(optname>=MRT_BASE && optname <=MRT_BASE+10) - { + if (optname >= MRT_BASE && optname <= MRT_BASE+10) { return ip_mroute_getsockopt(sk,optname,optval,optlen); } #endif - if(get_user(len,optlen)) + if (get_user(len,optlen)) return -EFAULT; - if(len < 0) + if (len < 0) return -EINVAL; lock_sock(sk); - switch(optname) { - case IP_OPTIONS: - { - unsigned char optbuf[sizeof(struct ip_options)+40]; - struct ip_options * opt = (struct ip_options*)optbuf; - opt->optlen = 0; - if (inet->opt) - memcpy(optbuf, inet->opt, - sizeof(struct ip_options)+ - inet->opt->optlen); - release_sock(sk); - - if (opt->optlen == 0) - return put_user(0, optlen); - - ip_options_undo(opt); - - len = min_t(unsigned int, len, opt->optlen); - if(put_user(len, optlen)) - return -EFAULT; - if(copy_to_user(optval, opt->__data, len)) - return -EFAULT; - return 0; - } - case IP_PKTINFO: - val = (inet->cmsg_flags & IP_CMSG_PKTINFO) != 0; - break; - case IP_RECVTTL: - val = (inet->cmsg_flags & IP_CMSG_TTL) != 0; - break; - case IP_RECVTOS: - val = (inet->cmsg_flags & IP_CMSG_TOS) != 0; - break; - case IP_RECVOPTS: - val = (inet->cmsg_flags & IP_CMSG_RECVOPTS) != 0; - break; - case IP_RETOPTS: - val = (inet->cmsg_flags & IP_CMSG_RETOPTS) != 0; - break; - case IP_PASSSEC: - val = (inet->cmsg_flags & IP_CMSG_PASSSEC) != 0; - break; - case IP_TOS: - val = inet->tos; - break; - case IP_TTL: - val = (inet->uc_ttl == -1 ? 
- sysctl_ip_default_ttl : - inet->uc_ttl); - break; - case IP_HDRINCL: - val = inet->hdrincl; - break; - case IP_MTU_DISCOVER: - val = inet->pmtudisc; - break; - case IP_MTU: - { - struct dst_entry *dst; - val = 0; - dst = sk_dst_get(sk); - if (dst) { - val = dst_mtu(dst); - dst_release(dst); - } - if (!val) { - release_sock(sk); - return -ENOTCONN; - } - break; + switch (optname) { + case IP_OPTIONS: + { + unsigned char optbuf[sizeof(struct ip_options)+40]; + struct ip_options * opt = (struct ip_options*)optbuf; + opt->optlen = 0; + if (inet->opt) + memcpy(optbuf, inet->opt, + sizeof(struct ip_options)+ + inet->opt->optlen); + release_sock(sk); + + if (opt->optlen == 0) + return put_user(0, optlen); + + ip_options_undo(opt); + + len = min_t(unsigned int, len, opt->optlen); + if (put_user(len, optlen)) + return -EFAULT; + if (copy_to_user(optval, opt->__data, len)) + return -EFAULT; + return 0; + } + case IP_PKTINFO: + val = (inet->cmsg_flags & IP_CMSG_PKTINFO) != 0; + break; + case IP_RECVTTL: + val = (inet->cmsg_flags & IP_CMSG_TTL) != 0; + break; + case IP_RECVTOS: + val = (inet->cmsg_flags & IP_CMSG_TOS) != 0; + break; + case IP_RECVOPTS: + val = (inet->cmsg_flags & IP_CMSG_RECVOPTS) != 0; + break; + case IP_RETOPTS: + val = (inet->cmsg_flags & IP_CMSG_RETOPTS) != 0; + break; + case IP_PASSSEC: + val = (inet->cmsg_flags & IP_CMSG_PASSSEC) != 0; + break; + case IP_TOS: + val = inet->tos; + break; + case IP_TTL: + val = (inet->uc_ttl == -1 ? + sysctl_ip_default_ttl : + inet->uc_ttl); + break; + case IP_HDRINCL: + val = inet->hdrincl; + break; + case IP_MTU_DISCOVER: + val = inet->pmtudisc; + break; + case IP_MTU: + { + struct dst_entry *dst; + val = 0; + dst = sk_dst_get(sk); + if (dst) { + val = dst_mtu(dst); + dst_release(dst); } - case IP_RECVERR: - val = inet->recverr; - break; - case IP_MULTICAST_TTL: - val = inet->mc_ttl; - break; - case IP_MULTICAST_LOOP: - val = inet->mc_loop; - break; - case IP_MULTICAST_IF: - { - struct in_addr addr; - len = min_t(unsigned int, len, sizeof(struct in_addr)); - addr.s_addr = inet->mc_addr; + if (!val) { release_sock(sk); - - if(put_user(len, optlen)) - return -EFAULT; - if(copy_to_user(optval, &addr, len)) - return -EFAULT; - return 0; + return -ENOTCONN; } - case IP_MSFILTER: - { - struct ip_msfilter msf; - int err; + break; + } + case IP_RECVERR: + val = inet->recverr; + break; + case IP_MULTICAST_TTL: + val = inet->mc_ttl; + break; + case IP_MULTICAST_LOOP: + val = inet->mc_loop; + break; + case IP_MULTICAST_IF: + { + struct in_addr addr; + len = min_t(unsigned int, len, sizeof(struct in_addr)); + addr.s_addr = inet->mc_addr; + release_sock(sk); - if (len < IP_MSFILTER_SIZE(0)) { - release_sock(sk); - return -EINVAL; - } - if (copy_from_user(&msf, optval, IP_MSFILTER_SIZE(0))) { - release_sock(sk); - return -EFAULT; - } - err = ip_mc_msfget(sk, &msf, - (struct ip_msfilter __user *)optval, optlen); + if (put_user(len, optlen)) + return -EFAULT; + if (copy_to_user(optval, &addr, len)) + return -EFAULT; + return 0; + } + case IP_MSFILTER: + { + struct ip_msfilter msf; + int err; + + if (len < IP_MSFILTER_SIZE(0)) { release_sock(sk); - return err; + return -EINVAL; } - case MCAST_MSFILTER: - { - struct group_filter gsf; - int err; - - if (len < GROUP_FILTER_SIZE(0)) { - release_sock(sk); - return -EINVAL; - } - if (copy_from_user(&gsf, optval, GROUP_FILTER_SIZE(0))) { - release_sock(sk); - return -EFAULT; - } - err = ip_mc_gsfget(sk, &gsf, - (struct group_filter __user *)optval, optlen); + if (copy_from_user(&msf, optval, IP_MSFILTER_SIZE(0))) { 
release_sock(sk); - return err; + return -EFAULT; } - case IP_PKTOPTIONS: - { - struct msghdr msg; + err = ip_mc_msfget(sk, &msf, + (struct ip_msfilter __user *)optval, optlen); + release_sock(sk); + return err; + } + case MCAST_MSFILTER: + { + struct group_filter gsf; + int err; + if (len < GROUP_FILTER_SIZE(0)) { release_sock(sk); - - if (sk->sk_type != SOCK_STREAM) - return -ENOPROTOOPT; - - msg.msg_control = optval; - msg.msg_controllen = len; - msg.msg_flags = 0; - - if (inet->cmsg_flags & IP_CMSG_PKTINFO) { - struct in_pktinfo info; - - info.ipi_addr.s_addr = inet->rcv_saddr; - info.ipi_spec_dst.s_addr = inet->rcv_saddr; - info.ipi_ifindex = inet->mc_index; - put_cmsg(&msg, SOL_IP, IP_PKTINFO, sizeof(info), &info); - } - if (inet->cmsg_flags & IP_CMSG_TTL) { - int hlim = inet->mc_ttl; - put_cmsg(&msg, SOL_IP, IP_TTL, sizeof(hlim), &hlim); - } - len -= msg.msg_controllen; - return put_user(len, optlen); + return -EINVAL; } - case IP_FREEBIND: - val = inet->freebind; - break; - default: + if (copy_from_user(&gsf, optval, GROUP_FILTER_SIZE(0))) { release_sock(sk); + return -EFAULT; + } + err = ip_mc_gsfget(sk, &gsf, + (struct group_filter __user *)optval, optlen); + release_sock(sk); + return err; + } + case IP_PKTOPTIONS: + { + struct msghdr msg; + + release_sock(sk); + + if (sk->sk_type != SOCK_STREAM) return -ENOPROTOOPT; + + msg.msg_control = optval; + msg.msg_controllen = len; + msg.msg_flags = 0; + + if (inet->cmsg_flags & IP_CMSG_PKTINFO) { + struct in_pktinfo info; + + info.ipi_addr.s_addr = inet->rcv_saddr; + info.ipi_spec_dst.s_addr = inet->rcv_saddr; + info.ipi_ifindex = inet->mc_index; + put_cmsg(&msg, SOL_IP, IP_PKTINFO, sizeof(info), &info); + } + if (inet->cmsg_flags & IP_CMSG_TTL) { + int hlim = inet->mc_ttl; + put_cmsg(&msg, SOL_IP, IP_TTL, sizeof(hlim), &hlim); + } + len -= msg.msg_controllen; + return put_user(len, optlen); + } + case IP_FREEBIND: + val = inet->freebind; + break; + default: + release_sock(sk); + return -ENOPROTOOPT; } release_sock(sk); if (len < sizeof(int) && len > 0 && val>=0 && val<255) { unsigned char ucval = (unsigned char)val; len = 1; - if(put_user(len, optlen)) + if (put_user(len, optlen)) return -EFAULT; - if(copy_to_user(optval,&ucval,1)) + if (copy_to_user(optval,&ucval,1)) return -EFAULT; } else { len = min_t(unsigned int, sizeof(int), len); - if(put_user(len, optlen)) + if (put_user(len, optlen)) return -EFAULT; - if(copy_to_user(optval,&val,len)) + if (copy_to_user(optval,&val,len)) return -EFAULT; } return 0; } int ip_getsockopt(struct sock *sk, int level, - int optname, char __user *optval, int __user *optlen) + int optname, char __user *optval, int __user *optlen) { int err; @@ -1169,7 +1174,7 @@ int ip_getsockopt(struct sock *sk, int l ) { int len; - if(get_user(len,optlen)) + if (get_user(len,optlen)) return -EFAULT; lock_sock(sk); diff -puN net/ipv4/ipcomp.c~git-net net/ipv4/ipcomp.c --- a/net/ipv4/ipcomp.c~git-net +++ a/net/ipv4/ipcomp.c @@ -43,21 +43,15 @@ static LIST_HEAD(ipcomp_tfms_list); static int ipcomp_decompress(struct xfrm_state *x, struct sk_buff *skb) { - int err, plen, dlen; struct ipcomp_data *ipcd = x->data; - u8 *start, *scratch; - struct crypto_comp *tfm; - int cpu; - - plen = skb->len; - dlen = IPCOMP_SCRATCH_SIZE; - start = skb->data; - - cpu = get_cpu(); - scratch = *per_cpu_ptr(ipcomp_scratches, cpu); - tfm = *per_cpu_ptr(ipcd->tfms, cpu); + const int plen = skb->len; + int dlen = IPCOMP_SCRATCH_SIZE; + const u8 *start = skb->data; + const int cpu = get_cpu(); + u8 *scratch = *per_cpu_ptr(ipcomp_scratches, 
cpu); + struct crypto_comp *tfm = *per_cpu_ptr(ipcd->tfms, cpu); + int err = crypto_comp_decompress(tfm, start, plen, scratch, &dlen); - err = crypto_comp_decompress(tfm, start, plen, scratch, &dlen); if (err) goto out; @@ -72,7 +66,7 @@ static int ipcomp_decompress(struct xfrm skb->truesize += dlen - plen; __skb_put(skb, dlen - plen); - memcpy(skb->data, scratch, dlen); + skb_copy_to_linear_data(skb, scratch, dlen); out: put_cpu(); return err; @@ -90,10 +84,10 @@ static int ipcomp_input(struct xfrm_stat skb->ip_summed = CHECKSUM_NONE; /* Remove ipcomp header and decompress original payload */ - iph = skb->nh.iph; + iph = ip_hdr(skb); ipch = (void *)skb->data; iph->protocol = ipch->nexthdr; - skb->h.raw = skb->nh.raw + sizeof(*ipch); + skb->transport_header = skb->network_header + sizeof(*ipch); __skb_pull(skb, sizeof(*ipch)); err = ipcomp_decompress(x, skb); @@ -103,23 +97,16 @@ out: static int ipcomp_compress(struct xfrm_state *x, struct sk_buff *skb) { - int err, plen, dlen, ihlen; - struct iphdr *iph = skb->nh.iph; struct ipcomp_data *ipcd = x->data; - u8 *start, *scratch; - struct crypto_comp *tfm; - int cpu; + const int ihlen = ip_hdrlen(skb); + const int plen = skb->len - ihlen; + int dlen = IPCOMP_SCRATCH_SIZE; + u8 *start = skb->data + ihlen; + const int cpu = get_cpu(); + u8 *scratch = *per_cpu_ptr(ipcomp_scratches, cpu); + struct crypto_comp *tfm = *per_cpu_ptr(ipcd->tfms, cpu); + int err = crypto_comp_compress(tfm, start, plen, scratch, &dlen); - ihlen = iph->ihl * 4; - plen = skb->len - ihlen; - dlen = IPCOMP_SCRATCH_SIZE; - start = skb->data + ihlen; - - cpu = get_cpu(); - scratch = *per_cpu_ptr(ipcomp_scratches, cpu); - tfm = *per_cpu_ptr(ipcd->tfms, cpu); - - err = crypto_comp_compress(tfm, start, plen, scratch, &dlen); if (err) goto out; @@ -142,12 +129,11 @@ out: static int ipcomp_output(struct xfrm_state *x, struct sk_buff *skb) { int err; - struct iphdr *iph; struct ip_comp_hdr *ipch; struct ipcomp_data *ipcd = x->data; int hdr_len = 0; + struct iphdr *iph = ip_hdr(skb); - iph = skb->nh.iph; iph->tot_len = htons(skb->len); hdr_len = iph->ihl * 4; if ((skb->len - hdr_len) < ipcd->threshold) { @@ -159,7 +145,7 @@ static int ipcomp_output(struct xfrm_sta goto out_ok; err = ipcomp_compress(x, skb); - iph = skb->nh.iph; + iph = ip_hdr(skb); if (err) { goto out_ok; @@ -188,8 +174,8 @@ static void ipcomp4_err(struct sk_buff * struct ip_comp_hdr *ipch = (struct ip_comp_hdr *)(skb->data+(iph->ihl<<2)); struct xfrm_state *x; - if (skb->h.icmph->type != ICMP_DEST_UNREACH || - skb->h.icmph->code != ICMP_FRAG_NEEDED) + if (icmp_hdr(skb)->type != ICMP_DEST_UNREACH || + icmp_hdr(skb)->code != ICMP_FRAG_NEEDED) return; spi = htonl(ntohs(ipch->cpi)); diff -puN net/ipv4/ipconfig.c~git-net net/ipv4/ipconfig.c --- a/net/ipv4/ipconfig.c~git-net +++ a/net/ipv4/ipconfig.c @@ -432,7 +432,7 @@ ic_rarp_recv(struct sk_buff *skb, struct goto drop; /* Basic sanity checks can be done without the lock. */ - rarp = (struct arphdr *)skb->h.raw; + rarp = (struct arphdr *)skb_transport_header(skb); /* If this test doesn't pass, it's not IP, or we should * ignore it anyway. @@ -455,7 +455,7 @@ ic_rarp_recv(struct sk_buff *skb, struct goto drop; /* OK, it is all there and looks valid, process... */ - rarp = (struct arphdr *)skb->h.raw; + rarp = (struct arphdr *)skb_transport_header(skb); rarp_ptr = (unsigned char *) (rarp + 1); /* One reply at a time, please. 
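/*
 * ipcomp_decompress() above inflates into a bounded per-CPU scratch buffer
 * first and only then grows the skb and copies the result back.  A loose
 * userspace analogue of that shape, using zlib's uncompress() as a stand-in
 * for the kernel crypto_comp API (assumed helper names, a single static
 * scratch instead of per-CPU storage):
 */
#include <string.h>
#include <zlib.h>

#define SCRATCH_SIZE 65400   /* plays the role of IPCOMP_SCRATCH_SIZE */

static long decompress_via_scratch(const unsigned char *in, unsigned long inlen,
                                   unsigned char *out, unsigned long outcap)
{
        static unsigned char scratch[SCRATCH_SIZE];
        uLongf dlen = sizeof(scratch);

        if (uncompress(scratch, &dlen, in, inlen) != Z_OK)
                return -1;   /* corrupt payload or scratch too small */
        if (dlen > outcap)
                return -1;   /* caller's buffer cannot hold the result */
        memcpy(out, scratch, dlen);
        return (long)dlen;
}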
*/ @@ -702,7 +702,8 @@ static void __init ic_bootp_send_if(stru memset(b, 0, sizeof(struct bootp_pkt)); /* Construct IP header */ - skb->nh.iph = h = &b->iph; + skb_reset_network_header(skb); + h = ip_hdr(skb); h->version = 4; h->ihl = 5; h->tot_len = htons(sizeof(struct bootp_pkt)); @@ -782,7 +783,7 @@ static void __init ic_do_bootp_ext(u8 *e u8 *c; printk("DHCP/BOOTP: Got extension %d:",*ext); - for(c=ext+2; cnh.iph; + b = (struct bootp_pkt *)skb_network_header(skb); h = &b->iph; if (h->ihl != 5 || h->version != 4 || h->protocol != IPPROTO_UDP) @@ -883,7 +884,7 @@ static int __init ic_bootp_recv(struct s if (!pskb_may_pull(skb, skb->len)) goto drop; - b = (struct bootp_pkt *) skb->nh.iph; + b = (struct bootp_pkt *)skb_network_header(skb); h = &b->iph; /* One reply at a time, please. */ @@ -938,7 +939,7 @@ static int __init ic_bootp_recv(struct s if (opt[1] >= 4) memcpy(&server_id, opt + 2, 4); break; - }; + } } #ifdef IPCONFIG_DEBUG @@ -983,7 +984,7 @@ static int __init ic_bootp_recv(struct s ic_myaddr = NONE; ic_servaddr = NONE; goto drop_unlock; - }; + } ic_dhcp_msgtype = mt; @@ -1094,7 +1095,7 @@ static int __init ic_dynamic(void) retries = CONF_SEND_RETRIES; get_random_bytes(&timeout, sizeof(timeout)); timeout = CONF_BASE_TIMEOUT + (timeout % (unsigned) CONF_TIMEOUT_RANDOM); - for(;;) { + for (;;) { #ifdef IPCONFIG_BOOTP if (do_bootp && (d->able & IC_BOOTP)) ic_bootp_send_if(d, jiffies - start_jiffies); diff -puN net/ipv4/ipip.c~git-net net/ipv4/ipip.c --- a/net/ipv4/ipip.c~git-net +++ a/net/ipv4/ipip.c @@ -280,8 +280,8 @@ static int ipip_err(struct sk_buff *skb, ICMP in the real Internet is absolutely infeasible. */ struct iphdr *iph = (struct iphdr*)skb->data; - int type = skb->h.icmph->type; - int code = skb->h.icmph->code; + const int type = icmp_hdr(skb)->type; + const int code = icmp_hdr(skb)->code; struct ip_tunnel *t; int err; @@ -336,8 +336,8 @@ out: struct iphdr *iph = (struct iphdr*)dp; int hlen = iph->ihl<<2; struct iphdr *eiph; - int type = skb->h.icmph->type; - int code = skb->h.icmph->code; + const int type = icmp_hdr(skb)->type; + const int code = icmp_hdr(skb)->code; int rel_type = 0; int rel_code = 0; __be32 rel_info = 0; @@ -354,7 +354,7 @@ out: default: return 0; case ICMP_PARAMETERPROB: - n = ntohl(skb->h.icmph->un.gateway) >> 24; + n = ntohl(icmp_hdr(skb)->un.gateway) >> 24; if (n < hlen) return 0; @@ -373,7 +373,7 @@ out: return 0; case ICMP_FRAG_NEEDED: /* And it is the only really necessary thing :-) */ - n = ntohs(skb->h.icmph->un.frag.mtu); + n = ntohs(icmp_hdr(skb)->un.frag.mtu); if (n < hlen+68) return 0; n -= hlen; @@ -405,7 +405,7 @@ out: dst_release(skb2->dst); skb2->dst = NULL; skb_pull(skb2, skb->data - (u8*)eiph); - skb2->nh.raw = skb2->data; + skb_reset_network_header(skb2); /* Try to guess incoming interface */ memset(&fl, 0, sizeof(fl)); @@ -461,9 +461,10 @@ out: #endif } -static inline void ipip_ecn_decapsulate(struct iphdr *outer_iph, struct sk_buff *skb) +static inline void ipip_ecn_decapsulate(const struct iphdr *outer_iph, + struct sk_buff *skb) { - struct iphdr *inner_iph = skb->nh.iph; + struct iphdr *inner_iph = ip_hdr(skb); if (INET_ECN_is_ce(outer_iph->tos)) IP_ECN_set_ce(inner_iph); @@ -471,10 +472,8 @@ static inline void ipip_ecn_decapsulate( static int ipip_rcv(struct sk_buff *skb) { - struct iphdr *iph; struct ip_tunnel *tunnel; - - iph = skb->nh.iph; + const struct iphdr *iph = ip_hdr(skb); read_lock(&ipip_lock); if ((tunnel = ipip_tunnel_lookup(iph->saddr, iph->daddr)) != NULL) { @@ -486,8 +485,8 @@ static int ipip_rcv(struct 
sk_buff *skb) secpath_reset(skb); - skb->mac.raw = skb->nh.raw; - skb->nh.raw = skb->data; + skb->mac_header = skb->network_header; + skb_reset_network_header(skb); skb->protocol = htons(ETH_P_IP); skb->pkt_type = PACKET_HOST; @@ -521,7 +520,7 @@ static int ipip_tunnel_xmit(struct sk_bu __be16 df = tiph->frag_off; struct rtable *rt; /* Route to the other host */ struct net_device *tdev; /* Device to other host */ - struct iphdr *old_iph = skb->nh.iph; + struct iphdr *old_iph = ip_hdr(skb); struct iphdr *iph; /* Our new IP header */ int max_headroom; /* The extra header space needed */ __be32 dst = tiph->daddr; @@ -615,11 +614,12 @@ static int ipip_tunnel_xmit(struct sk_bu skb_set_owner_w(new_skb, skb->sk); dev_kfree_skb(skb); skb = new_skb; - old_iph = skb->nh.iph; + old_iph = ip_hdr(skb); } - skb->h.raw = skb->nh.raw; - skb->nh.raw = skb_push(skb, sizeof(struct iphdr)); + skb->transport_header = skb->network_header; + skb_push(skb, sizeof(struct iphdr)); + skb_reset_network_header(skb); memset(&(IPCB(skb)->opt), 0, sizeof(IPCB(skb)->opt)); IPCB(skb)->flags &= ~(IPSKB_XFRM_TUNNEL_SIZE | IPSKB_XFRM_TRANSFORMED | IPSKB_REROUTED); @@ -630,7 +630,7 @@ static int ipip_tunnel_xmit(struct sk_bu * Push down and install the IPIP header. */ - iph = skb->nh.iph; + iph = ip_hdr(skb); iph->version = 4; iph->ihl = sizeof(struct iphdr)>>2; iph->frag_off = df; diff -puN net/ipv4/ipmr.c~git-net net/ipv4/ipmr.c --- a/net/ipv4/ipmr.c~git-net +++ a/net/ipv4/ipmr.c @@ -62,6 +62,7 @@ #include #include #include +#include #if defined(CONFIG_IP_PIMSM_V1) || defined(CONFIG_IP_PIMSM_V2) #define CONFIG_IP_PIMSM 1 @@ -302,8 +303,8 @@ static void ipmr_destroy_unres(struct mf atomic_dec(&cache_resolve_queue_len); - while((skb=skb_dequeue(&c->mfc_un.unres.unresolved))) { - if (skb->nh.iph->version == 0) { + while ((skb=skb_dequeue(&c->mfc_un.unres.unresolved))) { + if (ip_hdr(skb)->version == 0) { struct nlmsghdr *nlh = (struct nlmsghdr *)skb_pull(skb, sizeof(struct iphdr)); nlh->nlmsg_type = NLMSG_ERROR; nlh->nlmsg_len = NLMSG_LENGTH(sizeof(struct nlmsgerr)); @@ -479,7 +480,7 @@ static struct mfc_cache *ipmr_cache_find static struct mfc_cache *ipmr_cache_alloc(void) { struct mfc_cache *c=kmem_cache_zalloc(mrt_cachep, GFP_KERNEL); - if(c==NULL) + if (c==NULL) return NULL; c->mfc_un.res.minvif = MAXVIFS; return c; @@ -488,7 +489,7 @@ static struct mfc_cache *ipmr_cache_allo static struct mfc_cache *ipmr_cache_alloc_unres(void) { struct mfc_cache *c=kmem_cache_zalloc(mrt_cachep, GFP_ATOMIC); - if(c==NULL) + if (c==NULL) return NULL; skb_queue_head_init(&c->mfc_un.unres.unresolved); c->mfc_un.unres.expires = jiffies + 10*HZ; @@ -508,12 +509,13 @@ static void ipmr_cache_resolve(struct mf * Play the pending entries through our router */ - while((skb=__skb_dequeue(&uc->mfc_un.unres.unresolved))) { - if (skb->nh.iph->version == 0) { + while ((skb=__skb_dequeue(&uc->mfc_un.unres.unresolved))) { + if (ip_hdr(skb)->version == 0) { struct nlmsghdr *nlh = (struct nlmsghdr *)skb_pull(skb, sizeof(struct iphdr)); if (ipmr_fill_mroute(skb, c, NLMSG_DATA(nlh)) > 0) { - nlh->nlmsg_len = skb->tail - (u8*)nlh; + nlh->nlmsg_len = (skb_tail_pointer(skb) - + (u8 *)nlh); } else { nlh->nlmsg_type = NLMSG_ERROR; nlh->nlmsg_len = NLMSG_LENGTH(sizeof(struct nlmsgerr)); @@ -539,7 +541,7 @@ static void ipmr_cache_resolve(struct mf static int ipmr_cache_report(struct sk_buff *pkt, vifi_t vifi, int assert) { struct sk_buff *skb; - int ihl = pkt->nh.iph->ihl<<2; + const int ihl = ip_hdrlen(pkt); struct igmphdr *igmp; struct igmpmsg *msg; int ret; @@ 
-551,7 +553,7 @@ static int ipmr_cache_report(struct sk_b #endif skb = alloc_skb(128, GFP_ATOMIC); - if(!skb) + if (!skb) return -ENOBUFS; #ifdef CONFIG_IP_PIMSM @@ -561,14 +563,17 @@ static int ipmr_cache_report(struct sk_b And all this only to mangle msg->im_msgtype and to set msg->im_mbz to "mbz" :-) */ - msg = (struct igmpmsg*)skb_push(skb, sizeof(struct iphdr)); - skb->nh.raw = skb->h.raw = (u8*)msg; - memcpy(msg, pkt->nh.raw, sizeof(struct iphdr)); + skb_push(skb, sizeof(struct iphdr)); + skb_reset_network_header(skb); + skb_reset_transport_header(skb); + msg = (struct igmpmsg *)skb_network_header(skb); + memcpy(msg, skb_network_header(pkt), sizeof(struct iphdr)); msg->im_msgtype = IGMPMSG_WHOLEPKT; msg->im_mbz = 0; msg->im_vif = reg_vif_num; - skb->nh.iph->ihl = sizeof(struct iphdr) >> 2; - skb->nh.iph->tot_len = htons(ntohs(pkt->nh.iph->tot_len) + sizeof(struct iphdr)); + ip_hdr(skb)->ihl = sizeof(struct iphdr) >> 2; + ip_hdr(skb)->tot_len = htons(ntohs(ip_hdr(pkt)->tot_len) + + sizeof(struct iphdr)); } else #endif { @@ -577,10 +582,11 @@ static int ipmr_cache_report(struct sk_b * Copy the IP header */ - skb->nh.iph = (struct iphdr *)skb_put(skb, ihl); - memcpy(skb->data,pkt->data,ihl); - skb->nh.iph->protocol = 0; /* Flag to the kernel this is a route add */ - msg = (struct igmpmsg*)skb->nh.iph; + skb->network_header = skb->tail; + skb_put(skb, ihl); + skb_copy_to_linear_data(skb, pkt->data, ihl); + ip_hdr(skb)->protocol = 0; /* Flag to the kernel this is a route add */ + msg = (struct igmpmsg *)skb_network_header(skb); msg->im_vif = vifi; skb->dst = dst_clone(pkt->dst); @@ -592,8 +598,8 @@ static int ipmr_cache_report(struct sk_b igmp->type = msg->im_msgtype = assert; igmp->code = 0; - skb->nh.iph->tot_len=htons(skb->len); /* Fix the length */ - skb->h.raw = skb->nh.raw; + ip_hdr(skb)->tot_len = htons(skb->len); /* Fix the length */ + skb->transport_header = skb->network_header; } if (mroute_socket == NULL) { @@ -622,11 +628,12 @@ ipmr_cache_unresolved(vifi_t vifi, struc { int err; struct mfc_cache *c; + const struct iphdr *iph = ip_hdr(skb); spin_lock_bh(&mfc_unres_lock); for (c=mfc_unres_queue; c; c=c->next) { - if (c->mfc_mcastgrp == skb->nh.iph->daddr && - c->mfc_origin == skb->nh.iph->saddr) + if (c->mfc_mcastgrp == iph->daddr && + c->mfc_origin == iph->saddr) break; } @@ -646,9 +653,9 @@ ipmr_cache_unresolved(vifi_t vifi, struc /* * Fill in the new cache entry */ - c->mfc_parent=-1; - c->mfc_origin=skb->nh.iph->saddr; - c->mfc_mcastgrp=skb->nh.iph->daddr; + c->mfc_parent = -1; + c->mfc_origin = iph->saddr; + c->mfc_mcastgrp = iph->daddr; /* * Reflect first query at mrouted. 
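The hunks above and below all follow one pattern: open-coded uses of the old skb->nh / skb->h header unions are replaced by the new sk_buff header accessors. As a rough sketch of what those helpers reduce to (assuming the plain-pointer sk_buff layout; the real definitions live in include/linux/skbuff.h, include/linux/ip.h and include/linux/icmp.h, and differ slightly when CONFIG_NET_SKBUFF_DATA_USES_OFFSET turns the header members into offsets from skb->head):

/*
 * Simplified sketch only -- not the authoritative definitions.
 */
static inline unsigned char *skb_network_header(const struct sk_buff *skb)
{
	return skb->network_header;		/* stands in for skb->nh.raw */
}

static inline unsigned char *skb_transport_header(const struct sk_buff *skb)
{
	return skb->transport_header;		/* stands in for skb->h.raw */
}

static inline void skb_reset_network_header(struct sk_buff *skb)
{
	skb->network_header = skb->data;	/* was: skb->nh.raw = skb->data */
}

static inline struct iphdr *ip_hdr(const struct sk_buff *skb)
{
	return (struct iphdr *)skb_network_header(skb);	/* was: skb->nh.iph */
}

static inline unsigned int ip_hdrlen(const struct sk_buff *skb)
{
	return ip_hdr(skb)->ihl * 4;		/* was: skb->nh.iph->ihl * 4 */
}

static inline struct icmphdr *icmp_hdr(const struct sk_buff *skb)
{
	return (struct icmphdr *)skb_transport_header(skb);	/* was: skb->h.icmph */
}

The other helpers appearing in these diffs are analogous thin wrappers: skb_tail_pointer(skb) and skb_reset_transport_header(skb) cover skb->tail and skb->transport_header, skb_copy_to_linear_data(skb, from, len) covers memcpy(skb->data, from, len), and skb_csum_unnecessary(skb) covers the open-coded skb->ip_summed checksum test seen in the IPVS hunks further down.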
@@ -734,7 +741,7 @@ static int ipmr_mfc_add(struct mfcctl *m return 0; } - if(!MULTICAST(mfc->mfcc_mcastgrp.s_addr)) + if (!MULTICAST(mfc->mfcc_mcastgrp.s_addr)) return -EINVAL; c=ipmr_cache_alloc(); @@ -788,7 +795,7 @@ static void mroute_clean_tables(struct s /* * Shut down all active vif entries */ - for(i=0; isk_type != SOCK_RAW || - inet_sk(sk)->num != IPPROTO_IGMP) - return -EOPNOTSUPP; - if(optlen!=sizeof(int)) - return -ENOPROTOOPT; - - rtnl_lock(); - if (mroute_socket) { - rtnl_unlock(); - return -EADDRINUSE; - } - - ret = ip_ra_control(sk, 1, mrtsock_destruct); - if (ret == 0) { - write_lock_bh(&mrt_lock); - mroute_socket=sk; - write_unlock_bh(&mrt_lock); + switch (optname) { + case MRT_INIT: + if (sk->sk_type != SOCK_RAW || + inet_sk(sk)->num != IPPROTO_IGMP) + return -EOPNOTSUPP; + if (optlen!=sizeof(int)) + return -ENOPROTOOPT; - ipv4_devconf.mc_forwarding++; - } + rtnl_lock(); + if (mroute_socket) { rtnl_unlock(); - return ret; - case MRT_DONE: - if (sk!=mroute_socket) - return -EACCES; - return ip_ra_control(sk, 0, NULL); - case MRT_ADD_VIF: - case MRT_DEL_VIF: - if(optlen!=sizeof(vif)) - return -EINVAL; - if (copy_from_user(&vif,optval,sizeof(vif))) - return -EFAULT; - if(vif.vifc_vifi >= MAXVIFS) - return -ENFILE; - rtnl_lock(); - if (optname==MRT_ADD_VIF) { - ret = vif_add(&vif, sk==mroute_socket); - } else { - ret = vif_delete(vif.vifc_vifi); - } - rtnl_unlock(); - return ret; + return -EADDRINUSE; + } + + ret = ip_ra_control(sk, 1, mrtsock_destruct); + if (ret == 0) { + write_lock_bh(&mrt_lock); + mroute_socket=sk; + write_unlock_bh(&mrt_lock); + + ipv4_devconf.mc_forwarding++; + } + rtnl_unlock(); + return ret; + case MRT_DONE: + if (sk!=mroute_socket) + return -EACCES; + return ip_ra_control(sk, 0, NULL); + case MRT_ADD_VIF: + case MRT_DEL_VIF: + if (optlen!=sizeof(vif)) + return -EINVAL; + if (copy_from_user(&vif,optval,sizeof(vif))) + return -EFAULT; + if (vif.vifc_vifi >= MAXVIFS) + return -ENFILE; + rtnl_lock(); + if (optname==MRT_ADD_VIF) { + ret = vif_add(&vif, sk==mroute_socket); + } else { + ret = vif_delete(vif.vifc_vifi); + } + rtnl_unlock(); + return ret; /* * Manipulate the forwarding caches. These live * in a sort of kernel/user symbiosis. */ - case MRT_ADD_MFC: - case MRT_DEL_MFC: - if(optlen!=sizeof(mfc)) - return -EINVAL; - if (copy_from_user(&mfc,optval, sizeof(mfc))) - return -EFAULT; - rtnl_lock(); - if (optname==MRT_DEL_MFC) - ret = ipmr_mfc_delete(&mfc); - else - ret = ipmr_mfc_add(&mfc, sk==mroute_socket); - rtnl_unlock(); - return ret; + case MRT_ADD_MFC: + case MRT_DEL_MFC: + if (optlen!=sizeof(mfc)) + return -EINVAL; + if (copy_from_user(&mfc,optval, sizeof(mfc))) + return -EFAULT; + rtnl_lock(); + if (optname==MRT_DEL_MFC) + ret = ipmr_mfc_delete(&mfc); + else + ret = ipmr_mfc_add(&mfc, sk==mroute_socket); + rtnl_unlock(); + return ret; /* * Control PIM assert. 
*/ - case MRT_ASSERT: - { - int v; - if(get_user(v,(int __user *)optval)) - return -EFAULT; - mroute_do_assert=(v)?1:0; - return 0; - } + case MRT_ASSERT: + { + int v; + if (get_user(v,(int __user *)optval)) + return -EFAULT; + mroute_do_assert=(v)?1:0; + return 0; + } #ifdef CONFIG_IP_PIMSM - case MRT_PIM: - { - int v, ret; - if(get_user(v,(int __user *)optval)) - return -EFAULT; - v = (v)?1:0; - rtnl_lock(); - ret = 0; - if (v != mroute_do_pim) { - mroute_do_pim = v; - mroute_do_assert = v; + case MRT_PIM: + { + int v, ret; + if (get_user(v,(int __user *)optval)) + return -EFAULT; + v = (v)?1:0; + rtnl_lock(); + ret = 0; + if (v != mroute_do_pim) { + mroute_do_pim = v; + mroute_do_assert = v; #ifdef CONFIG_IP_PIMSM_V2 - if (mroute_do_pim) - ret = inet_add_protocol(&pim_protocol, - IPPROTO_PIM); - else - ret = inet_del_protocol(&pim_protocol, - IPPROTO_PIM); - if (ret < 0) - ret = -EAGAIN; + if (mroute_do_pim) + ret = inet_add_protocol(&pim_protocol, + IPPROTO_PIM); + else + ret = inet_del_protocol(&pim_protocol, + IPPROTO_PIM); + if (ret < 0) + ret = -EAGAIN; #endif - } - rtnl_unlock(); - return ret; } + rtnl_unlock(); + return ret; + } #endif - /* - * Spurious command, or MRT_VERSION which you cannot - * set. - */ - default: - return -ENOPROTOOPT; + /* + * Spurious command, or MRT_VERSION which you cannot + * set. + */ + default: + return -ENOPROTOOPT; } } @@ -983,7 +988,7 @@ int ip_mroute_getsockopt(struct sock *sk int olr; int val; - if(optname!=MRT_VERSION && + if (optname!=MRT_VERSION && #ifdef CONFIG_IP_PIMSM optname!=MRT_PIM && #endif @@ -997,17 +1002,17 @@ int ip_mroute_getsockopt(struct sock *sk if (olr < 0) return -EINVAL; - if(put_user(olr,optlen)) + if (put_user(olr,optlen)) return -EFAULT; - if(optname==MRT_VERSION) + if (optname==MRT_VERSION) val=0x0305; #ifdef CONFIG_IP_PIMSM - else if(optname==MRT_PIM) + else if (optname==MRT_PIM) val=mroute_do_pim; #endif else val=mroute_do_assert; - if(copy_to_user(optval,&val,olr)) + if (copy_to_user(optval,&val,olr)) return -EFAULT; return 0; } @@ -1023,48 +1028,47 @@ int ipmr_ioctl(struct sock *sk, int cmd, struct vif_device *vif; struct mfc_cache *c; - switch(cmd) - { - case SIOCGETVIFCNT: - if (copy_from_user(&vr,arg,sizeof(vr))) - return -EFAULT; - if(vr.vifi>=maxvif) - return -EINVAL; - read_lock(&mrt_lock); - vif=&vif_table[vr.vifi]; - if(VIF_EXISTS(vr.vifi)) { - vr.icount=vif->pkt_in; - vr.ocount=vif->pkt_out; - vr.ibytes=vif->bytes_in; - vr.obytes=vif->bytes_out; - read_unlock(&mrt_lock); - - if (copy_to_user(arg,&vr,sizeof(vr))) - return -EFAULT; - return 0; - } + switch (cmd) { + case SIOCGETVIFCNT: + if (copy_from_user(&vr,arg,sizeof(vr))) + return -EFAULT; + if (vr.vifi>=maxvif) + return -EINVAL; + read_lock(&mrt_lock); + vif=&vif_table[vr.vifi]; + if (VIF_EXISTS(vr.vifi)) { + vr.icount=vif->pkt_in; + vr.ocount=vif->pkt_out; + vr.ibytes=vif->bytes_in; + vr.obytes=vif->bytes_out; read_unlock(&mrt_lock); - return -EADDRNOTAVAIL; - case SIOCGETSGCNT: - if (copy_from_user(&sr,arg,sizeof(sr))) - return -EFAULT; - read_lock(&mrt_lock); - c = ipmr_cache_find(sr.src.s_addr, sr.grp.s_addr); - if (c) { - sr.pktcnt = c->mfc_un.res.pkt; - sr.bytecnt = c->mfc_un.res.bytes; - sr.wrong_if = c->mfc_un.res.wrong_if; - read_unlock(&mrt_lock); - - if (copy_to_user(arg,&sr,sizeof(sr))) - return -EFAULT; - return 0; - } + if (copy_to_user(arg,&vr,sizeof(vr))) + return -EFAULT; + return 0; + } + read_unlock(&mrt_lock); + return -EADDRNOTAVAIL; + case SIOCGETSGCNT: + if (copy_from_user(&sr,arg,sizeof(sr))) + return -EFAULT; + + 
read_lock(&mrt_lock); + c = ipmr_cache_find(sr.src.s_addr, sr.grp.s_addr); + if (c) { + sr.pktcnt = c->mfc_un.res.pkt; + sr.bytecnt = c->mfc_un.res.bytes; + sr.wrong_if = c->mfc_un.res.wrong_if; read_unlock(&mrt_lock); - return -EADDRNOTAVAIL; - default: - return -ENOIOCTLCMD; + + if (copy_to_user(arg,&sr,sizeof(sr))) + return -EFAULT; + return 0; + } + read_unlock(&mrt_lock); + return -EADDRNOTAVAIL; + default: + return -ENOIOCTLCMD; } } @@ -1076,7 +1080,7 @@ static int ipmr_device_event(struct noti if (event != NETDEV_UNREGISTER) return NOTIFY_DONE; v=&vif_table[0]; - for(ct=0;ctdev==ptr) vif_delete(ct); } @@ -1096,11 +1100,17 @@ static struct notifier_block ip_mr_notif static void ip_encap(struct sk_buff *skb, __be32 saddr, __be32 daddr) { - struct iphdr *iph = (struct iphdr *)skb_push(skb,sizeof(struct iphdr)); + struct iphdr *iph; + struct iphdr *old_iph = ip_hdr(skb); + + skb_push(skb, sizeof(struct iphdr)); + skb->transport_header = skb->network_header; + skb_reset_network_header(skb); + iph = ip_hdr(skb); iph->version = 4; - iph->tos = skb->nh.iph->tos; - iph->ttl = skb->nh.iph->ttl; + iph->tos = old_iph->tos; + iph->ttl = old_iph->ttl; iph->frag_off = 0; iph->daddr = daddr; iph->saddr = saddr; @@ -1110,8 +1120,6 @@ static void ip_encap(struct sk_buff *skb ip_select_ident(iph, skb->dst, NULL); ip_send_check(iph); - skb->h.ipiph = skb->nh.iph; - skb->nh.iph = iph; memset(&(IPCB(skb)->opt), 0, sizeof(IPCB(skb)->opt)); nf_reset(skb); } @@ -1134,7 +1142,7 @@ static inline int ipmr_forward_finish(st static void ipmr_queue_xmit(struct sk_buff *skb, struct mfc_cache *c, int vifi) { - struct iphdr *iph = skb->nh.iph; + const struct iphdr *iph = ip_hdr(skb); struct vif_device *vif = &vif_table[vifi]; struct net_device *dev; struct rtable *rt; @@ -1200,8 +1208,7 @@ static void ipmr_queue_xmit(struct sk_bu dst_release(skb->dst); skb->dst = &rt->u.dst; - iph = skb->nh.iph; - ip_decrease_ttl(iph); + ip_decrease_ttl(ip_hdr(skb)); /* FIXME: forward and output firewalls used to be called here. * What do we do with netfilter? -- RR */ @@ -1301,7 +1308,7 @@ static int ip_mr_forward(struct sk_buff * Forward the frame */ for (ct = cache->mfc_un.res.maxvif-1; ct >= cache->mfc_un.res.minvif; ct--) { - if (skb->nh.iph->ttl > cache->mfc_un.res.ttls[ct]) { + if (ip_hdr(skb)->ttl > cache->mfc_un.res.ttls[ct]) { if (psend != -1) { struct sk_buff *skb2 = skb_clone(skb, GFP_ATOMIC); if (skb2) @@ -1347,7 +1354,7 @@ int ip_mr_input(struct sk_buff *skb) if (IPCB(skb)->opt.router_alert) { if (ip_call_ra_chain(skb)) return 0; - } else if (skb->nh.iph->protocol == IPPROTO_IGMP){ + } else if (ip_hdr(skb)->protocol == IPPROTO_IGMP){ /* IGMPv1 (and broken IGMPv2 implementations sort of Cisco IOS <= 11.2(8)) do not put router alert option to IGMP packets destined to routable @@ -1366,7 +1373,7 @@ int ip_mr_input(struct sk_buff *skb) } read_lock(&mrt_lock); - cache = ipmr_cache_find(skb->nh.iph->saddr, skb->nh.iph->daddr); + cache = ipmr_cache_find(ip_hdr(skb)->saddr, ip_hdr(skb)->daddr); /* * No usable cache entry @@ -1426,14 +1433,15 @@ int pim_rcv_v1(struct sk_buff * skb) if (!pskb_may_pull(skb, sizeof(*pim) + sizeof(*encap))) goto drop; - pim = (struct igmphdr*)skb->h.raw; + pim = igmp_hdr(skb); if (!mroute_do_pim || skb->len < sizeof(*pim) + sizeof(*encap) || pim->group != PIM_V1_VERSION || pim->code != PIM_V1_REGISTER) goto drop; - encap = (struct iphdr*)(skb->h.raw + sizeof(struct igmphdr)); + encap = (struct iphdr *)(skb_transport_header(skb) + + sizeof(struct igmphdr)); /* Check that: a. 
packet is really destinted to a multicast group @@ -1455,9 +1463,9 @@ int pim_rcv_v1(struct sk_buff * skb) if (reg_dev == NULL) goto drop; - skb->mac.raw = skb->nh.raw; + skb->mac_header = skb->network_header; skb_pull(skb, (u8*)encap - skb->data); - skb->nh.iph = (struct iphdr *)skb->data; + skb_reset_network_header(skb); skb->dev = reg_dev; skb->protocol = htons(ETH_P_IP); skb->ip_summed = 0; @@ -1486,7 +1494,7 @@ static int pim_rcv(struct sk_buff * skb) if (!pskb_may_pull(skb, sizeof(*pim) + sizeof(*encap))) goto drop; - pim = (struct pimreghdr*)skb->h.raw; + pim = (struct pimreghdr *)skb_transport_header(skb); if (pim->type != ((PIM_VERSION<<4)|(PIM_REGISTER)) || (pim->flags&PIM_NULL_REGISTER) || (ip_compute_csum((void *)pim, sizeof(*pim)) != 0 && @@ -1494,7 +1502,8 @@ static int pim_rcv(struct sk_buff * skb) goto drop; /* check if the inner packet is destined to mcast group */ - encap = (struct iphdr*)(skb->h.raw + sizeof(struct pimreghdr)); + encap = (struct iphdr *)(skb_transport_header(skb) + + sizeof(struct pimreghdr)); if (!MULTICAST(encap->daddr) || encap->tot_len == 0 || ntohs(encap->tot_len) + sizeof(*pim) > skb->len) @@ -1510,9 +1519,9 @@ static int pim_rcv(struct sk_buff * skb) if (reg_dev == NULL) goto drop; - skb->mac.raw = skb->nh.raw; + skb->mac_header = skb->network_header; skb_pull(skb, (u8*)encap - skb->data); - skb->nh.iph = (struct iphdr *)skb->data; + skb_reset_network_header(skb); skb->dev = reg_dev; skb->protocol = htons(ETH_P_IP); skb->ip_summed = 0; @@ -1537,7 +1546,7 @@ ipmr_fill_mroute(struct sk_buff *skb, st int ct; struct rtnexthop *nhp; struct net_device *dev = vif_table[c->mfc_parent].dev; - u8 *b = skb->tail; + u8 *b = skb_tail_pointer(skb); struct rtattr *mp_head; if (dev) @@ -1557,12 +1566,12 @@ ipmr_fill_mroute(struct sk_buff *skb, st } } mp_head->rta_type = RTA_MULTIPATH; - mp_head->rta_len = skb->tail - (u8*)mp_head; + mp_head->rta_len = skb_tail_pointer(skb) - (u8 *)mp_head; rtm->rtm_type = RTN_MULTICAST; return 1; rtattr_failure: - skb_trim(skb, b - skb->data); + nlmsg_trim(skb, b); return -EMSGSIZE; } @@ -1577,6 +1586,7 @@ int ipmr_get_route(struct sk_buff *skb, if (cache==NULL) { struct sk_buff *skb2; + struct iphdr *iph; struct net_device *dev; int vif; @@ -1596,11 +1606,13 @@ int ipmr_get_route(struct sk_buff *skb, return -ENOMEM; } - skb2->nh.raw = skb_push(skb2, sizeof(struct iphdr)); - skb2->nh.iph->ihl = sizeof(struct iphdr)>>2; - skb2->nh.iph->saddr = rt->rt_src; - skb2->nh.iph->daddr = rt->rt_dst; - skb2->nh.iph->version = 0; + skb_push(skb2, sizeof(struct iphdr)); + skb_reset_network_header(skb2); + iph = ip_hdr(skb2); + iph->ihl = sizeof(struct iphdr) >> 2; + iph->saddr = rt->rt_src; + iph->daddr = rt->rt_dst; + iph->version = 0; err = ipmr_cache_unresolved(vif, skb2); read_unlock(&mrt_lock); return err; @@ -1625,7 +1637,7 @@ static struct vif_device *ipmr_vif_seq_i loff_t pos) { for (iter->ct = 0; iter->ct < maxvif; ++iter->ct) { - if(!VIF_EXISTS(iter->ct)) + if (!VIF_EXISTS(iter->ct)) continue; if (pos-- == 0) return &vif_table[iter->ct]; @@ -1649,7 +1661,7 @@ static void *ipmr_vif_seq_next(struct se return ipmr_vif_seq_idx(iter, 0); while (++iter->ct < maxvif) { - if(!VIF_EXISTS(iter->ct)) + if (!VIF_EXISTS(iter->ct)) continue; return &vif_table[iter->ct]; } @@ -1680,7 +1692,7 @@ static int ipmr_vif_seq_show(struct seq_ return 0; } -static struct seq_operations ipmr_vif_seq_ops = { +static const struct seq_operations ipmr_vif_seq_ops = { .start = ipmr_vif_seq_start, .next = ipmr_vif_seq_next, .stop = ipmr_vif_seq_stop, @@ -1732,14 
+1744,14 @@ static struct mfc_cache *ipmr_mfc_seq_id it->cache = mfc_cache_array; read_lock(&mrt_lock); for (it->ct = 0; it->ct < MFC_LINES; it->ct++) - for(mfc = mfc_cache_array[it->ct]; mfc; mfc = mfc->next) + for (mfc = mfc_cache_array[it->ct]; mfc; mfc = mfc->next) if (pos-- == 0) return mfc; read_unlock(&mrt_lock); it->cache = &mfc_unres_queue; spin_lock_bh(&mfc_unres_lock); - for(mfc = mfc_unres_queue; mfc; mfc = mfc->next) + for (mfc = mfc_unres_queue; mfc; mfc = mfc->next) if (pos-- == 0) return mfc; spin_unlock_bh(&mfc_unres_lock); @@ -1829,9 +1841,9 @@ static int ipmr_mfc_seq_show(struct seq_ mfc->mfc_un.res.wrong_if); if (it->cache != &mfc_unres_queue) { - for(n = mfc->mfc_un.res.minvif; - n < mfc->mfc_un.res.maxvif; n++ ) { - if(VIF_EXISTS(n) + for (n = mfc->mfc_un.res.minvif; + n < mfc->mfc_un.res.maxvif; n++ ) { + if (VIF_EXISTS(n) && mfc->mfc_un.res.ttls[n] < 255) seq_printf(seq, " %2d:%-3d", @@ -1843,7 +1855,7 @@ static int ipmr_mfc_seq_show(struct seq_ return 0; } -static struct seq_operations ipmr_mfc_seq_ops = { +static const struct seq_operations ipmr_mfc_seq_ops = { .start = ipmr_mfc_seq_start, .next = ipmr_mfc_seq_next, .stop = ipmr_mfc_seq_stop, diff -puN net/ipv4/ipvs/ip_vs_app.c~git-net net/ipv4/ipvs/ip_vs_app.c --- a/net/ipv4/ipvs/ip_vs_app.c~git-net +++ a/net/ipv4/ipvs/ip_vs_app.c @@ -331,14 +331,14 @@ static inline int app_tcp_pkt_out(struct struct ip_vs_app *app) { int diff; - unsigned int tcp_offset = (*pskb)->nh.iph->ihl*4; + const unsigned int tcp_offset = ip_hdrlen(*pskb); struct tcphdr *th; __u32 seq; if (!ip_vs_make_skb_writable(pskb, tcp_offset + sizeof(*th))) return 0; - th = (struct tcphdr *)((*pskb)->nh.raw + tcp_offset); + th = (struct tcphdr *)(skb_network_header(*pskb) + tcp_offset); /* * Remember seq number in case this pkt gets resized @@ -406,14 +406,14 @@ static inline int app_tcp_pkt_in(struct struct ip_vs_app *app) { int diff; - unsigned int tcp_offset = (*pskb)->nh.iph->ihl*4; + const unsigned int tcp_offset = ip_hdrlen(*pskb); struct tcphdr *th; __u32 seq; if (!ip_vs_make_skb_writable(pskb, tcp_offset + sizeof(*th))) return 0; - th = (struct tcphdr *)((*pskb)->nh.raw + tcp_offset); + th = (struct tcphdr *)(skb_network_header(*pskb) + tcp_offset); /* * Remember seq number in case this pkt gets resized @@ -577,7 +577,6 @@ static const struct file_operations ip_v int ip_vs_skb_replace(struct sk_buff *skb, gfp_t pri, char *o_buf, int o_len, char *n_buf, int n_len) { - struct iphdr *iph; int diff; int o_offset; int o_left; @@ -603,12 +602,11 @@ int ip_vs_skb_replace(struct sk_buff *sk skb_put(skb, diff); memmove(skb->data + o_offset + n_len, skb->data + o_offset + o_len, o_left); - memcpy(skb->data + o_offset, n_buf, n_len); + skb_copy_to_linear_data_offset(skb, o_offset, n_buf, n_len); } /* must update the iph total length here */ - iph = skb->nh.iph; - iph->tot_len = htons(skb->len); + ip_hdr(skb)->tot_len = htons(skb->len); LeaveFunction(9); return 0; diff -puN net/ipv4/ipvs/ip_vs_core.c~git-net net/ipv4/ipvs/ip_vs_core.c --- a/net/ipv4/ipvs/ip_vs_core.c~git-net +++ a/net/ipv4/ipvs/ip_vs_core.c @@ -212,7 +212,7 @@ ip_vs_sched_persist(struct ip_vs_service __be16 ports[2]) { struct ip_vs_conn *cp = NULL; - struct iphdr *iph = skb->nh.iph; + struct iphdr *iph = ip_hdr(skb); struct ip_vs_dest *dest; struct ip_vs_conn *ct; __be16 dport; /* destination port to forward */ @@ -381,7 +381,7 @@ struct ip_vs_conn * ip_vs_schedule(struct ip_vs_service *svc, const struct sk_buff *skb) { struct ip_vs_conn *cp = NULL; - struct iphdr *iph = skb->nh.iph; + 
struct iphdr *iph = ip_hdr(skb); struct ip_vs_dest *dest; __be16 _ports[2], *pptr; @@ -447,7 +447,7 @@ int ip_vs_leave(struct ip_vs_service *sv struct ip_vs_protocol *pp) { __be16 _ports[2], *pptr; - struct iphdr *iph = skb->nh.iph; + struct iphdr *iph = ip_hdr(skb); pptr = skb_header_pointer(skb, iph->ihl*4, sizeof(_ports), _ports); @@ -546,7 +546,7 @@ ip_vs_gather_frags(struct sk_buff *skb, { skb = ip_defrag(skb, user); if (skb) - ip_send_check(skb->nh.iph); + ip_send_check(ip_hdr(skb)); return skb; } @@ -557,9 +557,10 @@ ip_vs_gather_frags(struct sk_buff *skb, void ip_vs_nat_icmp(struct sk_buff *skb, struct ip_vs_protocol *pp, struct ip_vs_conn *cp, int inout) { - struct iphdr *iph = skb->nh.iph; + struct iphdr *iph = ip_hdr(skb); unsigned int icmp_offset = iph->ihl*4; - struct icmphdr *icmph = (struct icmphdr *)(skb->nh.raw + icmp_offset); + struct icmphdr *icmph = (struct icmphdr *)(skb_network_header(skb) + + icmp_offset); struct iphdr *ciph = (struct iphdr *)(icmph + 1); if (inout) { @@ -617,14 +618,14 @@ static int ip_vs_out_icmp(struct sk_buff *related = 1; /* reassemble IP fragments */ - if (skb->nh.iph->frag_off & __constant_htons(IP_MF|IP_OFFSET)) { + if (ip_hdr(skb)->frag_off & htons(IP_MF | IP_OFFSET)) { skb = ip_vs_gather_frags(skb, IP_DEFRAG_VS_OUT); if (!skb) return NF_STOLEN; *pskb = skb; } - iph = skb->nh.iph; + iph = ip_hdr(skb); offset = ihl = iph->ihl * 4; ic = skb_header_pointer(skb, offset, sizeof(_icmph), &_icmph); if (ic == NULL) @@ -659,7 +660,7 @@ static int ip_vs_out_icmp(struct sk_buff return NF_ACCEPT; /* Is the embedded protocol header present? */ - if (unlikely(cih->frag_off & __constant_htons(IP_OFFSET) && + if (unlikely(cih->frag_off & htons(IP_OFFSET) && pp->dont_defrag)) return NF_ACCEPT; @@ -680,8 +681,7 @@ static int ip_vs_out_icmp(struct sk_buff } /* Ensure the checksum is correct */ - if (skb->ip_summed != CHECKSUM_UNNECESSARY && - ip_vs_checksum_complete(skb, ihl)) { + if (!skb_csum_unnecessary(skb) && ip_vs_checksum_complete(skb, ihl)) { /* Failed checksum! 
*/ IP_VS_DBG(1, "Forward ICMP: failed checksum from %d.%d.%d.%d!\n", NIPQUAD(iph->saddr)); @@ -712,8 +712,7 @@ static inline int is_tcp_reset(const str { struct tcphdr _tcph, *th; - th = skb_header_pointer(skb, skb->nh.iph->ihl * 4, - sizeof(_tcph), &_tcph); + th = skb_header_pointer(skb, ip_hdrlen(skb), sizeof(_tcph), &_tcph); if (th == NULL) return 0; return th->rst; @@ -740,14 +739,14 @@ ip_vs_out(unsigned int hooknum, struct s if (skb->ipvs_property) return NF_ACCEPT; - iph = skb->nh.iph; + iph = ip_hdr(skb); if (unlikely(iph->protocol == IPPROTO_ICMP)) { int related, verdict = ip_vs_out_icmp(pskb, &related); if (related) return verdict; skb = *pskb; - iph = skb->nh.iph; + iph = ip_hdr(skb); } pp = ip_vs_proto_get(iph->protocol); @@ -755,12 +754,12 @@ ip_vs_out(unsigned int hooknum, struct s return NF_ACCEPT; /* reassemble IP fragments */ - if (unlikely(iph->frag_off & __constant_htons(IP_MF|IP_OFFSET) && + if (unlikely(iph->frag_off & htons(IP_MF|IP_OFFSET) && !pp->dont_defrag)) { skb = ip_vs_gather_frags(skb, IP_DEFRAG_VS_OUT); if (!skb) return NF_STOLEN; - iph = skb->nh.iph; + iph = ip_hdr(skb); *pskb = skb; } @@ -810,8 +809,8 @@ ip_vs_out(unsigned int hooknum, struct s if (pp->snat_handler && !pp->snat_handler(pskb, pp, cp)) goto drop; skb = *pskb; - skb->nh.iph->saddr = cp->vaddr; - ip_send_check(skb->nh.iph); + ip_hdr(skb)->saddr = cp->vaddr; + ip_send_check(ip_hdr(skb)); /* For policy routing, packets originating from this * machine itself may be routed differently to packets @@ -861,7 +860,7 @@ ip_vs_in_icmp(struct sk_buff **pskb, int *related = 1; /* reassemble IP fragments */ - if (skb->nh.iph->frag_off & __constant_htons(IP_MF|IP_OFFSET)) { + if (ip_hdr(skb)->frag_off & htons(IP_MF | IP_OFFSET)) { skb = ip_vs_gather_frags(skb, hooknum == NF_IP_LOCAL_IN ? IP_DEFRAG_VS_IN : IP_DEFRAG_VS_FWD); @@ -870,7 +869,7 @@ ip_vs_in_icmp(struct sk_buff **pskb, int *pskb = skb; } - iph = skb->nh.iph; + iph = ip_hdr(skb); offset = ihl = iph->ihl * 4; ic = skb_header_pointer(skb, offset, sizeof(_icmph), &_icmph); if (ic == NULL) @@ -905,7 +904,7 @@ ip_vs_in_icmp(struct sk_buff **pskb, int return NF_ACCEPT; /* Is the embedded protocol header present? */ - if (unlikely(cih->frag_off & __constant_htons(IP_OFFSET) && + if (unlikely(cih->frag_off & htons(IP_OFFSET) && pp->dont_defrag)) return NF_ACCEPT; @@ -921,8 +920,7 @@ ip_vs_in_icmp(struct sk_buff **pskb, int verdict = NF_DROP; /* Ensure the checksum is correct */ - if (skb->ip_summed != CHECKSUM_UNNECESSARY && - ip_vs_checksum_complete(skb, ihl)) { + if (!skb_csum_unnecessary(skb) && ip_vs_checksum_complete(skb, ihl)) { /* Failed checksum! */ IP_VS_DBG(1, "Incoming ICMP: failed checksum from %d.%d.%d.%d!\n", NIPQUAD(iph->saddr)); @@ -966,19 +964,19 @@ ip_vs_in(unsigned int hooknum, struct sk || skb->dev == &loopback_dev || skb->sk)) { IP_VS_DBG(12, "packet type=%d proto=%d daddr=%d.%d.%d.%d ignored\n", skb->pkt_type, - skb->nh.iph->protocol, - NIPQUAD(skb->nh.iph->daddr)); + ip_hdr(skb)->protocol, + NIPQUAD(ip_hdr(skb)->daddr)); return NF_ACCEPT; } - iph = skb->nh.iph; + iph = ip_hdr(skb); if (unlikely(iph->protocol == IPPROTO_ICMP)) { int related, verdict = ip_vs_in_icmp(pskb, &related, hooknum); if (related) return verdict; skb = *pskb; - iph = skb->nh.iph; + iph = ip_hdr(skb); } /* Protocol supported? 
*/ @@ -1064,7 +1062,7 @@ ip_vs_forward_icmp(unsigned int hooknum, { int r; - if ((*pskb)->nh.iph->protocol != IPPROTO_ICMP) + if (ip_hdr(*pskb)->protocol != IPPROTO_ICMP) return NF_ACCEPT; return ip_vs_in_icmp(pskb, &r, hooknum); diff -puN net/ipv4/ipvs/ip_vs_dh.c~git-net net/ipv4/ipvs/ip_vs_dh.c --- a/net/ipv4/ipvs/ip_vs_dh.c~git-net +++ a/net/ipv4/ipvs/ip_vs_dh.c @@ -204,7 +204,7 @@ ip_vs_dh_schedule(struct ip_vs_service * { struct ip_vs_dest *dest; struct ip_vs_dh_bucket *tbl; - struct iphdr *iph = skb->nh.iph; + struct iphdr *iph = ip_hdr(skb); IP_VS_DBG(6, "ip_vs_dh_schedule(): Scheduling...\n"); diff -puN net/ipv4/ipvs/ip_vs_ftp.c~git-net net/ipv4/ipvs/ip_vs_ftp.c --- a/net/ipv4/ipvs/ip_vs_ftp.c~git-net +++ a/net/ipv4/ipvs/ip_vs_ftp.c @@ -159,10 +159,10 @@ static int ip_vs_ftp_out(struct ip_vs_ap return 0; if (cp->app_data == &ip_vs_ftp_pasv) { - iph = (*pskb)->nh.iph; + iph = ip_hdr(*pskb); th = (struct tcphdr *)&(((char *)iph)[iph->ihl*4]); data = (char *)th + (th->doff << 2); - data_limit = (*pskb)->tail; + data_limit = skb_tail_pointer(*pskb); if (ip_vs_ftp_get_addrport(data, data_limit, SERVER_STRING, @@ -262,14 +262,14 @@ static int ip_vs_ftp_in(struct ip_vs_app /* * Detecting whether it is passive */ - iph = (*pskb)->nh.iph; + iph = ip_hdr(*pskb); th = (struct tcphdr *)&(((char *)iph)[iph->ihl*4]); /* Since there may be OPTIONS in the TCP packet and the HLEN is the length of the header in 32-bit multiples, it is accurate to calculate data address by th+HLEN*4 */ data = data_start = (char *)th + (th->doff << 2); - data_limit = (*pskb)->tail; + data_limit = skb_tail_pointer(*pskb); while (data <= data_limit - 6) { if (strnicmp(data, "PASV\r\n", 6) == 0) { diff -puN net/ipv4/ipvs/ip_vs_lblc.c~git-net net/ipv4/ipvs/ip_vs_lblc.c --- a/net/ipv4/ipvs/ip_vs_lblc.c~git-net +++ a/net/ipv4/ipvs/ip_vs_lblc.c @@ -521,7 +521,7 @@ ip_vs_lblc_schedule(struct ip_vs_service struct ip_vs_dest *dest; struct ip_vs_lblc_table *tbl; struct ip_vs_lblc_entry *en; - struct iphdr *iph = skb->nh.iph; + struct iphdr *iph = ip_hdr(skb); IP_VS_DBG(6, "ip_vs_lblc_schedule(): Scheduling...\n"); diff -puN net/ipv4/ipvs/ip_vs_lblcr.c~git-net net/ipv4/ipvs/ip_vs_lblcr.c --- a/net/ipv4/ipvs/ip_vs_lblcr.c~git-net +++ a/net/ipv4/ipvs/ip_vs_lblcr.c @@ -775,7 +775,7 @@ ip_vs_lblcr_schedule(struct ip_vs_servic struct ip_vs_dest *dest; struct ip_vs_lblcr_table *tbl; struct ip_vs_lblcr_entry *en; - struct iphdr *iph = skb->nh.iph; + struct iphdr *iph = ip_hdr(skb); IP_VS_DBG(6, "ip_vs_lblcr_schedule(): Scheduling...\n"); diff -puN net/ipv4/ipvs/ip_vs_proto_ah.c~git-net net/ipv4/ipvs/ip_vs_proto_ah.c --- a/net/ipv4/ipvs/ip_vs_proto_ah.c~git-net +++ a/net/ipv4/ipvs/ip_vs_proto_ah.c @@ -52,15 +52,15 @@ ah_conn_in_get(const struct sk_buff *skb if (likely(!inverse)) { cp = ip_vs_conn_in_get(IPPROTO_UDP, iph->saddr, - __constant_htons(PORT_ISAKMP), + htons(PORT_ISAKMP), iph->daddr, - __constant_htons(PORT_ISAKMP)); + htons(PORT_ISAKMP)); } else { cp = ip_vs_conn_in_get(IPPROTO_UDP, iph->daddr, - __constant_htons(PORT_ISAKMP), + htons(PORT_ISAKMP), iph->saddr, - __constant_htons(PORT_ISAKMP)); + htons(PORT_ISAKMP)); } if (!cp) { @@ -89,15 +89,15 @@ ah_conn_out_get(const struct sk_buff *sk if (likely(!inverse)) { cp = ip_vs_conn_out_get(IPPROTO_UDP, iph->saddr, - __constant_htons(PORT_ISAKMP), + htons(PORT_ISAKMP), iph->daddr, - __constant_htons(PORT_ISAKMP)); + htons(PORT_ISAKMP)); } else { cp = ip_vs_conn_out_get(IPPROTO_UDP, iph->daddr, - __constant_htons(PORT_ISAKMP), + htons(PORT_ISAKMP), iph->saddr, - 
__constant_htons(PORT_ISAKMP)); + htons(PORT_ISAKMP)); } if (!cp) { diff -puN net/ipv4/ipvs/ip_vs_proto_tcp.c~git-net net/ipv4/ipvs/ip_vs_proto_tcp.c --- a/net/ipv4/ipvs/ip_vs_proto_tcp.c~git-net +++ a/net/ipv4/ipvs/ip_vs_proto_tcp.c @@ -76,16 +76,15 @@ tcp_conn_schedule(struct sk_buff *skb, struct ip_vs_service *svc; struct tcphdr _tcph, *th; - th = skb_header_pointer(skb, skb->nh.iph->ihl*4, - sizeof(_tcph), &_tcph); + th = skb_header_pointer(skb, ip_hdrlen(skb), sizeof(_tcph), &_tcph); if (th == NULL) { *verdict = NF_DROP; return 0; } if (th->syn && - (svc = ip_vs_service_get(skb->mark, skb->nh.iph->protocol, - skb->nh.iph->daddr, th->dest))) { + (svc = ip_vs_service_get(skb->mark, ip_hdr(skb)->protocol, + ip_hdr(skb)->daddr, th->dest))) { if (ip_vs_todrop()) { /* * It seems that we are very loaded. @@ -127,7 +126,7 @@ tcp_snat_handler(struct sk_buff **pskb, struct ip_vs_protocol *pp, struct ip_vs_conn *cp) { struct tcphdr *tcph; - unsigned int tcphoff = (*pskb)->nh.iph->ihl * 4; + const unsigned int tcphoff = ip_hdrlen(*pskb); /* csum_check requires unshared skb */ if (!ip_vs_make_skb_writable(pskb, tcphoff+sizeof(*tcph))) @@ -143,7 +142,7 @@ tcp_snat_handler(struct sk_buff **pskb, return 0; } - tcph = (void *)(*pskb)->nh.iph + tcphoff; + tcph = (void *)ip_hdr(*pskb) + tcphoff; tcph->source = cp->vport; /* Adjust TCP checksums */ @@ -175,7 +174,7 @@ tcp_dnat_handler(struct sk_buff **pskb, struct ip_vs_protocol *pp, struct ip_vs_conn *cp) { struct tcphdr *tcph; - unsigned int tcphoff = (*pskb)->nh.iph->ihl * 4; + const unsigned int tcphoff = ip_hdrlen(*pskb); /* csum_check requires unshared skb */ if (!ip_vs_make_skb_writable(pskb, tcphoff+sizeof(*tcph))) @@ -194,7 +193,7 @@ tcp_dnat_handler(struct sk_buff **pskb, return 0; } - tcph = (void *)(*pskb)->nh.iph + tcphoff; + tcph = (void *)ip_hdr(*pskb) + tcphoff; tcph->dest = cp->dport; /* @@ -224,15 +223,15 @@ tcp_dnat_handler(struct sk_buff **pskb, static int tcp_csum_check(struct sk_buff *skb, struct ip_vs_protocol *pp) { - unsigned int tcphoff = skb->nh.iph->ihl*4; + const unsigned int tcphoff = ip_hdrlen(skb); switch (skb->ip_summed) { case CHECKSUM_NONE: skb->csum = skb_checksum(skb, tcphoff, skb->len - tcphoff, 0); case CHECKSUM_COMPLETE: - if (csum_tcpudp_magic(skb->nh.iph->saddr, skb->nh.iph->daddr, + if (csum_tcpudp_magic(ip_hdr(skb)->saddr, ip_hdr(skb)->daddr, skb->len - tcphoff, - skb->nh.iph->protocol, skb->csum)) { + ip_hdr(skb)->protocol, skb->csum)) { IP_VS_DBG_RL_PKT(0, pp, skb, 0, "Failed checksum for"); return 0; @@ -467,8 +466,7 @@ tcp_state_transition(struct ip_vs_conn * { struct tcphdr _tcph, *th; - th = skb_header_pointer(skb, skb->nh.iph->ihl*4, - sizeof(_tcph), &_tcph); + th = skb_header_pointer(skb, ip_hdrlen(skb), sizeof(_tcph), &_tcph); if (th == NULL) return 0; diff -puN net/ipv4/ipvs/ip_vs_proto_udp.c~git-net net/ipv4/ipvs/ip_vs_proto_udp.c --- a/net/ipv4/ipvs/ip_vs_proto_udp.c~git-net +++ a/net/ipv4/ipvs/ip_vs_proto_udp.c @@ -22,7 +22,7 @@ #include #include - +#include static struct ip_vs_conn * udp_conn_in_get(const struct sk_buff *skb, struct ip_vs_protocol *pp, @@ -56,7 +56,7 @@ udp_conn_out_get(const struct sk_buff *s struct ip_vs_conn *cp; __be16 _ports[2], *pptr; - pptr = skb_header_pointer(skb, skb->nh.iph->ihl*4, + pptr = skb_header_pointer(skb, ip_hdrlen(skb), sizeof(_ports), _ports); if (pptr == NULL) return NULL; @@ -82,15 +82,15 @@ udp_conn_schedule(struct sk_buff *skb, s struct ip_vs_service *svc; struct udphdr _udph, *uh; - uh = skb_header_pointer(skb, skb->nh.iph->ihl*4, + uh = 
skb_header_pointer(skb, ip_hdrlen(skb), sizeof(_udph), &_udph); if (uh == NULL) { *verdict = NF_DROP; return 0; } - if ((svc = ip_vs_service_get(skb->mark, skb->nh.iph->protocol, - skb->nh.iph->daddr, uh->dest))) { + if ((svc = ip_vs_service_get(skb->mark, ip_hdr(skb)->protocol, + ip_hdr(skb)->daddr, uh->dest))) { if (ip_vs_todrop()) { /* * It seems that we are very loaded. @@ -133,7 +133,7 @@ udp_snat_handler(struct sk_buff **pskb, struct ip_vs_protocol *pp, struct ip_vs_conn *cp) { struct udphdr *udph; - unsigned int udphoff = (*pskb)->nh.iph->ihl * 4; + const unsigned int udphoff = ip_hdrlen(*pskb); /* csum_check requires unshared skb */ if (!ip_vs_make_skb_writable(pskb, udphoff+sizeof(*udph))) @@ -151,7 +151,7 @@ udp_snat_handler(struct sk_buff **pskb, return 0; } - udph = (void *)(*pskb)->nh.iph + udphoff; + udph = (void *)ip_hdr(*pskb) + udphoff; udph->source = cp->vport; /* @@ -187,7 +187,7 @@ udp_dnat_handler(struct sk_buff **pskb, struct ip_vs_protocol *pp, struct ip_vs_conn *cp) { struct udphdr *udph; - unsigned int udphoff = (*pskb)->nh.iph->ihl * 4; + unsigned int udphoff = ip_hdrlen(*pskb); /* csum_check requires unshared skb */ if (!ip_vs_make_skb_writable(pskb, udphoff+sizeof(*udph))) @@ -206,7 +206,7 @@ udp_dnat_handler(struct sk_buff **pskb, return 0; } - udph = (void *)(*pskb)->nh.iph + udphoff; + udph = (void *)ip_hdr(*pskb) + udphoff; udph->dest = cp->dport; /* @@ -239,7 +239,7 @@ static int udp_csum_check(struct sk_buff *skb, struct ip_vs_protocol *pp) { struct udphdr _udph, *uh; - unsigned int udphoff = skb->nh.iph->ihl*4; + const unsigned int udphoff = ip_hdrlen(skb); uh = skb_header_pointer(skb, udphoff, sizeof(_udph), &_udph); if (uh == NULL) @@ -251,10 +251,10 @@ udp_csum_check(struct sk_buff *skb, stru skb->csum = skb_checksum(skb, udphoff, skb->len - udphoff, 0); case CHECKSUM_COMPLETE: - if (csum_tcpudp_magic(skb->nh.iph->saddr, - skb->nh.iph->daddr, + if (csum_tcpudp_magic(ip_hdr(skb)->saddr, + ip_hdr(skb)->daddr, skb->len - udphoff, - skb->nh.iph->protocol, + ip_hdr(skb)->protocol, skb->csum)) { IP_VS_DBG_RL_PKT(0, pp, skb, 0, "Failed checksum for"); diff -puN net/ipv4/ipvs/ip_vs_sh.c~git-net net/ipv4/ipvs/ip_vs_sh.c --- a/net/ipv4/ipvs/ip_vs_sh.c~git-net +++ a/net/ipv4/ipvs/ip_vs_sh.c @@ -201,7 +201,7 @@ ip_vs_sh_schedule(struct ip_vs_service * { struct ip_vs_dest *dest; struct ip_vs_sh_bucket *tbl; - struct iphdr *iph = skb->nh.iph; + struct iphdr *iph = ip_hdr(skb); IP_VS_DBG(6, "ip_vs_sh_schedule(): Scheduling...\n"); diff -puN net/ipv4/ipvs/ip_vs_xmit.c~git-net net/ipv4/ipvs/ip_vs_xmit.c --- a/net/ipv4/ipvs/ip_vs_xmit.c~git-net +++ a/net/ipv4/ipvs/ip_vs_xmit.c @@ -156,7 +156,7 @@ ip_vs_bypass_xmit(struct sk_buff *skb, s struct ip_vs_protocol *pp) { struct rtable *rt; /* Route to the other host */ - struct iphdr *iph = skb->nh.iph; + struct iphdr *iph = ip_hdr(skb); u8 tos = iph->tos; int mtu; struct flowi fl = { @@ -178,7 +178,7 @@ ip_vs_bypass_xmit(struct sk_buff *skb, s /* MTU checking */ mtu = dst_mtu(&rt->u.dst); - if ((skb->len > mtu) && (iph->frag_off&__constant_htons(IP_DF))) { + if ((skb->len > mtu) && (iph->frag_off & htons(IP_DF))) { ip_rt_put(rt); icmp_send(skb, ICMP_DEST_UNREACH,ICMP_FRAG_NEEDED, htonl(mtu)); IP_VS_DBG_RL("ip_vs_bypass_xmit(): frag needed\n"); @@ -193,7 +193,7 @@ ip_vs_bypass_xmit(struct sk_buff *skb, s ip_rt_put(rt); return NF_STOLEN; } - ip_send_check(skb->nh.iph); + ip_send_check(ip_hdr(skb)); /* drop old route */ dst_release(skb->dst); @@ -226,7 +226,7 @@ ip_vs_nat_xmit(struct sk_buff *skb, stru { struct rtable *rt; /* 
Route to the other host */ int mtu; - struct iphdr *iph = skb->nh.iph; + struct iphdr *iph = ip_hdr(skb); EnterFunction(10); @@ -245,7 +245,7 @@ ip_vs_nat_xmit(struct sk_buff *skb, stru /* MTU checking */ mtu = dst_mtu(&rt->u.dst); - if ((skb->len > mtu) && (iph->frag_off&__constant_htons(IP_DF))) { + if ((skb->len > mtu) && (iph->frag_off & htons(IP_DF))) { ip_rt_put(rt); icmp_send(skb, ICMP_DEST_UNREACH,ICMP_FRAG_NEEDED, htonl(mtu)); IP_VS_DBG_RL_PKT(0, pp, skb, 0, "ip_vs_nat_xmit(): frag needed for"); @@ -266,8 +266,8 @@ ip_vs_nat_xmit(struct sk_buff *skb, stru /* mangle the packet */ if (pp->dnat_handler && !pp->dnat_handler(&skb, pp, cp)) goto tx_error; - skb->nh.iph->daddr = cp->daddr; - ip_send_check(skb->nh.iph); + ip_hdr(skb)->daddr = cp->daddr; + ip_send_check(ip_hdr(skb)); IP_VS_DBG_PKT(10, pp, skb, 0, "After DNAT"); @@ -320,19 +320,20 @@ ip_vs_tunnel_xmit(struct sk_buff *skb, s { struct rtable *rt; /* Route to the other host */ struct net_device *tdev; /* Device to other host */ - struct iphdr *old_iph = skb->nh.iph; + struct iphdr *old_iph = ip_hdr(skb); u8 tos = old_iph->tos; __be16 df = old_iph->frag_off; + sk_buff_data_t old_transport_header = skb->transport_header; struct iphdr *iph; /* Our new IP header */ int max_headroom; /* The extra header space needed */ int mtu; EnterFunction(10); - if (skb->protocol != __constant_htons(ETH_P_IP)) { + if (skb->protocol != htons(ETH_P_IP)) { IP_VS_DBG_RL("ip_vs_tunnel_xmit(): protocol error, " "ETH_P_IP: %d, skb protocol: %d\n", - __constant_htons(ETH_P_IP), skb->protocol); + htons(ETH_P_IP), skb->protocol); goto tx_error; } @@ -350,9 +351,9 @@ ip_vs_tunnel_xmit(struct sk_buff *skb, s if (skb->dst) skb->dst->ops->update_pmtu(skb->dst, mtu); - df |= (old_iph->frag_off&__constant_htons(IP_DF)); + df |= (old_iph->frag_off & htons(IP_DF)); - if ((old_iph->frag_off&__constant_htons(IP_DF)) + if ((old_iph->frag_off & htons(IP_DF)) && mtu < ntohs(old_iph->tot_len)) { icmp_send(skb, ICMP_DEST_UNREACH,ICMP_FRAG_NEEDED, htonl(mtu)); ip_rt_put(rt); @@ -377,15 +378,16 @@ ip_vs_tunnel_xmit(struct sk_buff *skb, s } kfree_skb(skb); skb = new_skb; - old_iph = skb->nh.iph; + old_iph = ip_hdr(skb); } - skb->h.raw = (void *) old_iph; + skb->transport_header = old_transport_header; /* fix old IP header checksum */ ip_send_check(old_iph); - skb->nh.raw = skb_push(skb, sizeof(struct iphdr)); + skb_push(skb, sizeof(struct iphdr)); + skb_reset_network_header(skb); memset(&(IPCB(skb)->opt), 0, sizeof(IPCB(skb)->opt)); /* drop old route */ @@ -395,7 +397,7 @@ ip_vs_tunnel_xmit(struct sk_buff *skb, s /* * Push down and install the IPIP header. 
*/ - iph = skb->nh.iph; + iph = ip_hdr(skb); iph->version = 4; iph->ihl = sizeof(struct iphdr)>>2; iph->frag_off = df; @@ -435,7 +437,7 @@ ip_vs_dr_xmit(struct sk_buff *skb, struc struct ip_vs_protocol *pp) { struct rtable *rt; /* Route to the other host */ - struct iphdr *iph = skb->nh.iph; + struct iphdr *iph = ip_hdr(skb); int mtu; EnterFunction(10); @@ -445,7 +447,7 @@ ip_vs_dr_xmit(struct sk_buff *skb, struc /* MTU checking */ mtu = dst_mtu(&rt->u.dst); - if ((iph->frag_off&__constant_htons(IP_DF)) && skb->len > mtu) { + if ((iph->frag_off & htons(IP_DF)) && skb->len > mtu) { icmp_send(skb, ICMP_DEST_UNREACH,ICMP_FRAG_NEEDED, htonl(mtu)); ip_rt_put(rt); IP_VS_DBG_RL("ip_vs_dr_xmit(): frag needed\n"); @@ -460,7 +462,7 @@ ip_vs_dr_xmit(struct sk_buff *skb, struc ip_rt_put(rt); return NF_STOLEN; } - ip_send_check(skb->nh.iph); + ip_send_check(ip_hdr(skb)); /* drop old route */ dst_release(skb->dst); @@ -514,12 +516,12 @@ ip_vs_icmp_xmit(struct sk_buff *skb, str * mangle and send the packet here (only for VS/NAT) */ - if (!(rt = __ip_vs_get_out_rt(cp, RT_TOS(skb->nh.iph->tos)))) + if (!(rt = __ip_vs_get_out_rt(cp, RT_TOS(ip_hdr(skb)->tos)))) goto tx_error_icmp; /* MTU checking */ mtu = dst_mtu(&rt->u.dst); - if ((skb->len > mtu) && (skb->nh.iph->frag_off&__constant_htons(IP_DF))) { + if ((skb->len > mtu) && (ip_hdr(skb)->frag_off & htons(IP_DF))) { ip_rt_put(rt); icmp_send(skb, ICMP_DEST_UNREACH, ICMP_FRAG_NEEDED, htonl(mtu)); IP_VS_DBG_RL("ip_vs_in_icmp(): frag needed\n"); diff -puN net/ipv4/multipath_drr.c~git-net net/ipv4/multipath_drr.c --- a/net/ipv4/multipath_drr.c~git-net +++ a/net/ipv4/multipath_drr.c @@ -100,7 +100,7 @@ static int drr_dev_event(struct notifier spin_unlock_bh(&state_lock); break; - }; + } return NOTIFY_DONE; } diff -puN net/ipv4/netfilter.c~git-net net/ipv4/netfilter.c --- a/net/ipv4/netfilter.c~git-net +++ a/net/ipv4/netfilter.c @@ -10,7 +10,7 @@ /* route_me_harder function, used by iptable_nat, iptable_mangle + ip_queue */ int ip_route_me_harder(struct sk_buff **pskb, unsigned addr_type) { - struct iphdr *iph = (*pskb)->nh.iph; + const struct iphdr *iph = ip_hdr(*pskb); struct rtable *rt; struct flowi fl = {}; struct dst_entry *odst; @@ -142,7 +142,7 @@ static void nf_ip_saveroute(const struct struct ip_rt_info *rt_info = nf_info_reroute(info); if (info->hook == NF_IP_LOCAL_OUT) { - const struct iphdr *iph = skb->nh.iph; + const struct iphdr *iph = ip_hdr(skb); rt_info->tos = iph->tos; rt_info->daddr = iph->daddr; @@ -155,7 +155,7 @@ static int nf_ip_reroute(struct sk_buff const struct ip_rt_info *rt_info = nf_info_reroute(info); if (info->hook == NF_IP_LOCAL_OUT) { - struct iphdr *iph = (*pskb)->nh.iph; + const struct iphdr *iph = ip_hdr(*pskb); if (!(iph->tos == rt_info->tos && iph->daddr == rt_info->daddr @@ -168,7 +168,7 @@ static int nf_ip_reroute(struct sk_buff __sum16 nf_ip_checksum(struct sk_buff *skb, unsigned int hook, unsigned int dataoff, u_int8_t protocol) { - struct iphdr *iph = skb->nh.iph; + const struct iphdr *iph = ip_hdr(skb); __sum16 csum = 0; switch (skb->ip_summed) { diff -puN net/ipv4/netfilter/Kconfig~git-net net/ipv4/netfilter/Kconfig --- a/net/ipv4/netfilter/Kconfig~git-net +++ a/net/ipv4/netfilter/Kconfig @@ -30,188 +30,6 @@ config NF_CONNTRACK_PROC_COMPAT If unsure, say Y. -# connection tracking, helpers and protocols -config IP_NF_CT_ACCT - bool "Connection tracking flow accounting" - depends on IP_NF_CONNTRACK - help - If this option is enabled, the connection tracking code will - keep per-flow packet and byte counters. 
- - Those counters can be used for flow-based accounting or the - `connbytes' match. - - If unsure, say `N'. - -config IP_NF_CONNTRACK_MARK - bool 'Connection mark tracking support' - depends on IP_NF_CONNTRACK - help - This option enables support for connection marks, used by the - `CONNMARK' target and `connmark' match. Similar to the mark value - of packets, but this mark value is kept in the conntrack session - instead of the individual packets. - -config IP_NF_CONNTRACK_SECMARK - bool 'Connection tracking security mark support' - depends on IP_NF_CONNTRACK && NETWORK_SECMARK - help - This option enables security markings to be applied to - connections. Typically they are copied to connections from - packets using the CONNSECMARK target and copied back from - connections to packets with the same target, with the packets - being originally labeled via SECMARK. - - If unsure, say 'N'. - -config IP_NF_CONNTRACK_EVENTS - bool "Connection tracking events (EXPERIMENTAL)" - depends on EXPERIMENTAL && IP_NF_CONNTRACK - help - If this option is enabled, the connection tracking code will - provide a notifier chain that can be used by other kernel code - to get notified about changes in the connection tracking state. - - IF unsure, say `N'. - -config IP_NF_CONNTRACK_NETLINK - tristate 'Connection tracking netlink interface (EXPERIMENTAL)' - depends on EXPERIMENTAL && IP_NF_CONNTRACK && NETFILTER_NETLINK - depends on IP_NF_CONNTRACK!=y || NETFILTER_NETLINK!=m - depends on IP_NF_NAT=n || IP_NF_NAT - help - This option enables support for a netlink-based userspace interface - - -config IP_NF_CT_PROTO_SCTP - tristate 'SCTP protocol connection tracking support (EXPERIMENTAL)' - depends on IP_NF_CONNTRACK && EXPERIMENTAL - help - With this option enabled, the connection tracking code will - be able to do state tracking on SCTP connections. - - If you want to compile it as a module, say M here and read - . If unsure, say `N'. - -config IP_NF_FTP - tristate "FTP protocol support" - depends on IP_NF_CONNTRACK - help - Tracking FTP connections is problematic: special helpers are - required for tracking them, and doing masquerading and other forms - of Network Address Translation on them. - - To compile it as a module, choose M here. If unsure, say Y. - -config IP_NF_IRC - tristate "IRC protocol support" - depends on IP_NF_CONNTRACK - ---help--- - There is a commonly-used extension to IRC called - Direct Client-to-Client Protocol (DCC). This enables users to send - files to each other, and also chat to each other without the need - of a server. DCC Sending is used anywhere you send files over IRC, - and DCC Chat is most commonly used by Eggdrop bots. If you are - using NAT, this extension will enable you to send files and initiate - chats. Note that you do NOT need this extension to get files or - have others initiate chats, or everything else in IRC. - - To compile it as a module, choose M here. If unsure, say Y. - -config IP_NF_NETBIOS_NS - tristate "NetBIOS name service protocol support (EXPERIMENTAL)" - depends on IP_NF_CONNTRACK && EXPERIMENTAL - help - NetBIOS name service requests are sent as broadcast messages from an - unprivileged port and responded to with unicast messages to the - same port. This make them hard to firewall properly because connection - tracking doesn't deal with broadcasts. This helper tracks locally - originating NetBIOS name service requests and the corresponding - responses. It relies on correct IP address configuration, specifically - netmask and broadcast address. 
When properly configured, the output - of "ip address show" should look similar to this: - - $ ip -4 address show eth0 - 4: eth0: mtu 1500 qdisc pfifo_fast qlen 1000 - inet 172.16.2.252/24 brd 172.16.2.255 scope global eth0 - - To compile it as a module, choose M here. If unsure, say N. - -config IP_NF_TFTP - tristate "TFTP protocol support" - depends on IP_NF_CONNTRACK - help - TFTP connection tracking helper, this is required depending - on how restrictive your ruleset is. - If you are using a tftp client behind -j SNAT or -j MASQUERADING - you will need this. - - To compile it as a module, choose M here. If unsure, say Y. - -config IP_NF_AMANDA - tristate "Amanda backup protocol support" - depends on IP_NF_CONNTRACK - select TEXTSEARCH - select TEXTSEARCH_KMP - help - If you are running the Amanda backup package - on this machine or machines that will be MASQUERADED through this - machine, then you may want to enable this feature. This allows the - connection tracking and natting code to allow the sub-channels that - Amanda requires for communication of the backup data, messages and - index. - - To compile it as a module, choose M here. If unsure, say Y. - -config IP_NF_PPTP - tristate 'PPTP protocol support' - depends on IP_NF_CONNTRACK - help - This module adds support for PPTP (Point to Point Tunnelling - Protocol, RFC2637) connection tracking and NAT. - - If you are running PPTP sessions over a stateful firewall or NAT - box, you may want to enable this feature. - - Please note that not all PPTP modes of operation are supported yet. - For more info, read top of the file - net/ipv4/netfilter/ip_conntrack_pptp.c - - If you want to compile it as a module, say M here and read - Documentation/modules.txt. If unsure, say `N'. - -config IP_NF_H323 - tristate 'H.323 protocol support (EXPERIMENTAL)' - depends on IP_NF_CONNTRACK && EXPERIMENTAL - help - H.323 is a VoIP signalling protocol from ITU-T. As one of the most - important VoIP protocols, it is widely used by voice hardware and - software including voice gateways, IP phones, Netmeeting, OpenPhone, - Gnomemeeting, etc. - - With this module you can support H.323 on a connection tracking/NAT - firewall. - - This module supports RAS, Fast Start, H.245 Tunnelling, Call - Forwarding, RTP/RTCP and T.120 based audio, video, fax, chat, - whiteboard, file transfer, etc. For more information, please - visit http://nath323.sourceforge.net/. - - If you want to compile it as a module, say 'M' here and read - Documentation/modules.txt. If unsure, say 'N'. - -config IP_NF_SIP - tristate "SIP protocol support (EXPERIMENTAL)" - depends on IP_NF_CONNTRACK && EXPERIMENTAL - help - SIP is an application-layer control protocol that can establish, - modify, and terminate multimedia sessions (conferences) such as - Internet telephony calls. With the ip_conntrack_sip and - the ip_nat_sip modules you can support the protocol on a connection - tracking/NATing firewall. - - To compile it as a module, choose M here. If unsure, say Y. - config IP_NF_QUEUE tristate "IP Userspace queueing via NETLINK (OBSOLETE)" help @@ -361,17 +179,6 @@ config IP_NF_TARGET_ULOG To compile it as a module, choose M here. If unsure, say N. -# NAT + specific targets: ip_conntrack -config IP_NF_NAT - tristate "Full NAT" - depends on IP_NF_IPTABLES && IP_NF_CONNTRACK - help - The Full NAT option allows masquerading, port forwarding and other - forms of full Network Address Port Translation. It is controlled by - the `nat' table in iptables: see the man page for iptables(8). 
- - To compile it as a module, choose M here. If unsure, say N. - # NAT + specific targets: nf_conntrack config NF_NAT tristate "Full NAT" @@ -383,11 +190,6 @@ config NF_NAT To compile it as a module, choose M here. If unsure, say N. -config IP_NF_NAT_NEEDED - bool - depends on IP_NF_NAT - default y - config NF_NAT_NEEDED bool depends on NF_NAT @@ -395,7 +197,7 @@ config NF_NAT_NEEDED config IP_NF_TARGET_MASQUERADE tristate "MASQUERADE target support" - depends on (NF_NAT || IP_NF_NAT) + depends on NF_NAT help Masquerading is a special case of NAT: all outgoing connections are changed to seem to come from a particular interface's address, and @@ -407,7 +209,7 @@ config IP_NF_TARGET_MASQUERADE config IP_NF_TARGET_REDIRECT tristate "REDIRECT target support" - depends on (NF_NAT || IP_NF_NAT) + depends on NF_NAT help REDIRECT is a special case of NAT: all incoming connections are mapped onto the incoming interface's address, causing the packets to @@ -418,7 +220,7 @@ config IP_NF_TARGET_REDIRECT config IP_NF_TARGET_NETMAP tristate "NETMAP target support" - depends on (NF_NAT || IP_NF_NAT) + depends on NF_NAT help NETMAP is an implementation of static 1:1 NAT mapping of network addresses. It maps the network address part, while keeping the host @@ -429,28 +231,13 @@ config IP_NF_TARGET_NETMAP config IP_NF_TARGET_SAME tristate "SAME target support" - depends on (NF_NAT || IP_NF_NAT) + depends on NF_NAT help This option adds a `SAME' target, which works like the standard SNAT target, but attempts to give clients the same IP for all connections. To compile it as a module, choose M here. If unsure, say N. -config IP_NF_NAT_SNMP_BASIC - tristate "Basic SNMP-ALG support (EXPERIMENTAL)" - depends on EXPERIMENTAL && IP_NF_NAT - ---help--- - - This module implements an Application Layer Gateway (ALG) for - SNMP payloads. In conjunction with NAT, it allows a network - management system to access multiple private networks with - conflicting addresses. It works by modifying IP addresses - inside SNMP payloads to match IP-layer NAT mapping. - - This is the "basic" form of SNMP-ALG, as described in RFC 2962 - - To compile it as a module, choose M here. If unsure, say N. 
- config NF_NAT_SNMP_BASIC tristate "Basic SNMP-ALG support (EXPERIMENTAL)" depends on EXPERIMENTAL && NF_NAT @@ -477,78 +264,37 @@ config NF_NAT_PROTO_GRE tristate depends on NF_NAT && NF_CT_PROTO_GRE -config IP_NF_NAT_FTP - tristate - depends on IP_NF_IPTABLES && IP_NF_CONNTRACK && IP_NF_NAT - default IP_NF_NAT && IP_NF_FTP - config NF_NAT_FTP tristate depends on IP_NF_IPTABLES && NF_CONNTRACK && NF_NAT default NF_NAT && NF_CONNTRACK_FTP -config IP_NF_NAT_IRC - tristate - depends on IP_NF_IPTABLES!=n && IP_NF_CONNTRACK!=n && IP_NF_NAT!=n - default IP_NF_NAT if IP_NF_IRC=y - default m if IP_NF_IRC=m - config NF_NAT_IRC tristate depends on IP_NF_IPTABLES && NF_CONNTRACK && NF_NAT default NF_NAT && NF_CONNTRACK_IRC -config IP_NF_NAT_TFTP - tristate - depends on IP_NF_IPTABLES!=n && IP_NF_CONNTRACK!=n && IP_NF_NAT!=n - default IP_NF_NAT if IP_NF_TFTP=y - default m if IP_NF_TFTP=m - config NF_NAT_TFTP tristate depends on IP_NF_IPTABLES && NF_CONNTRACK && NF_NAT default NF_NAT && NF_CONNTRACK_TFTP -config IP_NF_NAT_AMANDA - tristate - depends on IP_NF_IPTABLES!=n && IP_NF_CONNTRACK!=n && IP_NF_NAT!=n - default IP_NF_NAT if IP_NF_AMANDA=y - default m if IP_NF_AMANDA=m - config NF_NAT_AMANDA tristate depends on IP_NF_IPTABLES && NF_CONNTRACK && NF_NAT default NF_NAT && NF_CONNTRACK_AMANDA -config IP_NF_NAT_PPTP - tristate - depends on IP_NF_NAT!=n && IP_NF_PPTP!=n - default IP_NF_NAT if IP_NF_PPTP=y - default m if IP_NF_PPTP=m - config NF_NAT_PPTP tristate depends on IP_NF_IPTABLES && NF_CONNTRACK && NF_NAT default NF_NAT && NF_CONNTRACK_PPTP select NF_NAT_PROTO_GRE -config IP_NF_NAT_H323 - tristate - depends on IP_NF_IPTABLES!=n && IP_NF_CONNTRACK!=n && IP_NF_NAT!=n - default IP_NF_NAT if IP_NF_H323=y - default m if IP_NF_H323=m - config NF_NAT_H323 tristate depends on IP_NF_IPTABLES && NF_CONNTRACK && NF_NAT default NF_NAT && NF_CONNTRACK_H323 -config IP_NF_NAT_SIP - tristate - depends on IP_NF_IPTABLES!=n && IP_NF_CONNTRACK!=n && IP_NF_NAT!=n - default IP_NF_NAT if IP_NF_SIP=y - default m if IP_NF_SIP=m - config NF_NAT_SIP tristate depends on IP_NF_IPTABLES && NF_CONNTRACK && NF_NAT @@ -606,9 +352,8 @@ config IP_NF_TARGET_TTL config IP_NF_TARGET_CLUSTERIP tristate "CLUSTERIP target support (EXPERIMENTAL)" depends on IP_NF_MANGLE && EXPERIMENTAL - depends on IP_NF_CONNTRACK || NF_CONNTRACK_IPV4 - select IP_NF_CONNTRACK_MARK if IP_NF_CONNTRACK - select NF_CONNTRACK_MARK if NF_CONNTRACK_IPV4 + depends on NF_CONNTRACK_IPV4 + select NF_CONNTRACK_MARK help The CLUSTERIP target allows you to build load-balancing clusters of network servers without having a dedicated load-balancing diff -puN net/ipv4/netfilter/Makefile~git-net net/ipv4/netfilter/Makefile --- a/net/ipv4/netfilter/Makefile~git-net +++ a/net/ipv4/netfilter/Makefile @@ -2,8 +2,6 @@ # Makefile for the netfilter modules on top of IPv4. 
# -# objects for the standalone - connection tracking / NAT -ip_conntrack-objs := ip_conntrack_standalone.o ip_conntrack_core.o ip_conntrack_proto_generic.o ip_conntrack_proto_tcp.o ip_conntrack_proto_udp.o ip_conntrack_proto_icmp.o # objects for l3 independent conntrack nf_conntrack_ipv4-objs := nf_conntrack_l3proto_ipv4.o nf_conntrack_proto_icmp.o ifeq ($(CONFIG_NF_CONNTRACK_PROC_COMPAT),y) @@ -12,53 +10,14 @@ nf_conntrack_ipv4-objs += nf_conntrack_l endif endif -ip_nat-objs := ip_nat_core.o ip_nat_helper.o ip_nat_proto_unknown.o ip_nat_proto_tcp.o ip_nat_proto_udp.o ip_nat_proto_icmp.o -nf_nat-objs := nf_nat_core.o nf_nat_helper.o nf_nat_proto_unknown.o nf_nat_proto_tcp.o nf_nat_proto_udp.o nf_nat_proto_icmp.o -ifneq ($(CONFIG_NF_NAT),) +nf_nat-objs := nf_nat_core.o nf_nat_helper.o nf_nat_proto_unknown.o nf_nat_proto_tcp.o nf_nat_proto_udp.o nf_nat_proto_icmp.o iptable_nat-objs := nf_nat_rule.o nf_nat_standalone.o -else -iptable_nat-objs := ip_nat_rule.o ip_nat_standalone.o -endif - -ip_conntrack_pptp-objs := ip_conntrack_helper_pptp.o ip_conntrack_proto_gre.o -ip_nat_pptp-objs := ip_nat_helper_pptp.o ip_nat_proto_gre.o - -ip_conntrack_h323-objs := ip_conntrack_helper_h323.o ../../netfilter/nf_conntrack_h323_asn1.o -ip_nat_h323-objs := ip_nat_helper_h323.o # connection tracking -obj-$(CONFIG_IP_NF_CONNTRACK) += ip_conntrack.o obj-$(CONFIG_NF_CONNTRACK_IPV4) += nf_conntrack_ipv4.o -obj-$(CONFIG_IP_NF_NAT) += ip_nat.o obj-$(CONFIG_NF_NAT) += nf_nat.o -# conntrack netlink interface -obj-$(CONFIG_IP_NF_CONNTRACK_NETLINK) += ip_conntrack_netlink.o - - -# SCTP protocol connection tracking -obj-$(CONFIG_IP_NF_CT_PROTO_SCTP) += ip_conntrack_proto_sctp.o - -# connection tracking helpers -obj-$(CONFIG_IP_NF_H323) += ip_conntrack_h323.o -obj-$(CONFIG_IP_NF_PPTP) += ip_conntrack_pptp.o -obj-$(CONFIG_IP_NF_AMANDA) += ip_conntrack_amanda.o -obj-$(CONFIG_IP_NF_TFTP) += ip_conntrack_tftp.o -obj-$(CONFIG_IP_NF_FTP) += ip_conntrack_ftp.o -obj-$(CONFIG_IP_NF_IRC) += ip_conntrack_irc.o -obj-$(CONFIG_IP_NF_SIP) += ip_conntrack_sip.o -obj-$(CONFIG_IP_NF_NETBIOS_NS) += ip_conntrack_netbios_ns.o - -# NAT helpers (ip_conntrack) -obj-$(CONFIG_IP_NF_NAT_H323) += ip_nat_h323.o -obj-$(CONFIG_IP_NF_NAT_PPTP) += ip_nat_pptp.o -obj-$(CONFIG_IP_NF_NAT_AMANDA) += ip_nat_amanda.o -obj-$(CONFIG_IP_NF_NAT_TFTP) += ip_nat_tftp.o -obj-$(CONFIG_IP_NF_NAT_FTP) += ip_nat_ftp.o -obj-$(CONFIG_IP_NF_NAT_IRC) += ip_nat_irc.o -obj-$(CONFIG_IP_NF_NAT_SIP) += ip_nat_sip.o - # NAT helpers (nf_conntrack) obj-$(CONFIG_NF_NAT_AMANDA) += nf_nat_amanda.o obj-$(CONFIG_NF_NAT_FTP) += nf_nat_ftp.o @@ -78,7 +37,6 @@ obj-$(CONFIG_IP_NF_IPTABLES) += ip_table # the three instances of ip_tables obj-$(CONFIG_IP_NF_FILTER) += iptable_filter.o obj-$(CONFIG_IP_NF_MANGLE) += iptable_mangle.o -obj-$(CONFIG_IP_NF_NAT) += iptable_nat.o obj-$(CONFIG_NF_NAT) += iptable_nat.o obj-$(CONFIG_IP_NF_RAW) += iptable_raw.o @@ -100,7 +58,6 @@ obj-$(CONFIG_IP_NF_TARGET_MASQUERADE) += obj-$(CONFIG_IP_NF_TARGET_REDIRECT) += ipt_REDIRECT.o obj-$(CONFIG_IP_NF_TARGET_NETMAP) += ipt_NETMAP.o obj-$(CONFIG_IP_NF_TARGET_SAME) += ipt_SAME.o -obj-$(CONFIG_IP_NF_NAT_SNMP_BASIC) += ip_nat_snmp_basic.o obj-$(CONFIG_IP_NF_TARGET_LOG) += ipt_LOG.o obj-$(CONFIG_IP_NF_TARGET_ULOG) += ipt_ULOG.o obj-$(CONFIG_IP_NF_TARGET_CLUSTERIP) += ipt_CLUSTERIP.o diff -puN net/ipv4/netfilter/arp_tables.c~git-net net/ipv4/netfilter/arp_tables.c --- a/net/ipv4/netfilter/arp_tables.c~git-net +++ a/net/ipv4/netfilter/arp_tables.c @@ -245,7 +245,7 @@ unsigned int arpt_do_table(struct sk_buf e = 
get_entry(table_base, private->hook_entry[hook]); back = get_entry(table_base, private->underflow[hook]); - arp = (*pskb)->nh.arph; + arp = arp_hdr(*pskb); do { if (arp_packet_match(arp, (*pskb)->dev, indev, outdev, &e->arp)) { struct arpt_entry_target *t; @@ -297,7 +297,7 @@ unsigned int arpt_do_table(struct sk_buf t->data); /* Target might have changed stuff. */ - arp = (*pskb)->nh.arph; + arp = arp_hdr(*pskb); if (verdict == ARPT_CONTINUE) e = (void *)e + e->next_offset; diff -puN net/ipv4/netfilter/arpt_mangle.c~git-net net/ipv4/netfilter/arpt_mangle.c --- a/net/ipv4/netfilter/arpt_mangle.c~git-net +++ a/net/ipv4/netfilter/arpt_mangle.c @@ -30,35 +30,35 @@ target(struct sk_buff **pskb, *pskb = nskb; } - arp = (*pskb)->nh.arph; - arpptr = (*pskb)->nh.raw + sizeof(*arp); + arp = arp_hdr(*pskb); + arpptr = skb_network_header(*pskb) + sizeof(*arp); pln = arp->ar_pln; hln = arp->ar_hln; /* We assume that pln and hln were checked in the match */ if (mangle->flags & ARPT_MANGLE_SDEV) { if (ARPT_DEV_ADDR_LEN_MAX < hln || - (arpptr + hln > (**pskb).tail)) + (arpptr + hln > skb_tail_pointer(*pskb))) return NF_DROP; memcpy(arpptr, mangle->src_devaddr, hln); } arpptr += hln; if (mangle->flags & ARPT_MANGLE_SIP) { if (ARPT_MANGLE_ADDR_LEN_MAX < pln || - (arpptr + pln > (**pskb).tail)) + (arpptr + pln > skb_tail_pointer(*pskb))) return NF_DROP; memcpy(arpptr, &mangle->u_s.src_ip, pln); } arpptr += pln; if (mangle->flags & ARPT_MANGLE_TDEV) { if (ARPT_DEV_ADDR_LEN_MAX < hln || - (arpptr + hln > (**pskb).tail)) + (arpptr + hln > skb_tail_pointer(*pskb))) return NF_DROP; memcpy(arpptr, mangle->tgt_devaddr, hln); } arpptr += hln; if (mangle->flags & ARPT_MANGLE_TIP) { if (ARPT_MANGLE_ADDR_LEN_MAX < pln || - (arpptr + pln > (**pskb).tail)) + (arpptr + pln > skb_tail_pointer(*pskb))) return NF_DROP; memcpy(arpptr, &mangle->u_t.tgt_ip, pln); } diff -puN net/ipv4/netfilter/ip_conntrack_amanda.c~git-net /dev/null --- a/net/ipv4/netfilter/ip_conntrack_amanda.c +++ /dev/null @@ -1,229 +0,0 @@ -/* Amanda extension for IP connection tracking, Version 0.2 - * (C) 2002 by Brian J. Murrell - * based on HW's ip_conntrack_irc.c as well as other modules - * - * This program is free software; you can redistribute it and/or - * modify it under the terms of the GNU General Public License - * as published by the Free Software Foundation; either version - * 2 of the License, or (at your option) any later version. - * - * Module load syntax: - * insmod ip_conntrack_amanda.o [master_timeout=n] - * - * Where master_timeout is the timeout (in seconds) of the master - * connection (port 10080). This defaults to 5 minutes but if - * your clients take longer than 5 minutes to do their work - * before getting back to the Amanda server, you can increase - * this value. - * - */ -#include -#include -#include -#include -#include -#include -#include -#include - -#include -#include -#include - -static unsigned int master_timeout = 300; -static char *ts_algo = "kmp"; - -MODULE_AUTHOR("Brian J. 
Murrell "); -MODULE_DESCRIPTION("Amanda connection tracking module"); -MODULE_LICENSE("GPL"); -module_param(master_timeout, uint, 0600); -MODULE_PARM_DESC(master_timeout, "timeout for the master connection"); -module_param(ts_algo, charp, 0400); -MODULE_PARM_DESC(ts_algo, "textsearch algorithm to use (default kmp)"); - -unsigned int (*ip_nat_amanda_hook)(struct sk_buff **pskb, - enum ip_conntrack_info ctinfo, - unsigned int matchoff, - unsigned int matchlen, - struct ip_conntrack_expect *exp); -EXPORT_SYMBOL_GPL(ip_nat_amanda_hook); - -enum amanda_strings { - SEARCH_CONNECT, - SEARCH_NEWLINE, - SEARCH_DATA, - SEARCH_MESG, - SEARCH_INDEX, -}; - -static struct { - char *string; - size_t len; - struct ts_config *ts; -} search[] = { - [SEARCH_CONNECT] = { - .string = "CONNECT ", - .len = 8, - }, - [SEARCH_NEWLINE] = { - .string = "\n", - .len = 1, - }, - [SEARCH_DATA] = { - .string = "DATA ", - .len = 5, - }, - [SEARCH_MESG] = { - .string = "MESG ", - .len = 5, - }, - [SEARCH_INDEX] = { - .string = "INDEX ", - .len = 6, - }, -}; - -static int help(struct sk_buff **pskb, - struct ip_conntrack *ct, enum ip_conntrack_info ctinfo) -{ - struct ts_state ts; - struct ip_conntrack_expect *exp; - unsigned int dataoff, start, stop, off, i; - char pbuf[sizeof("65535")], *tmp; - u_int16_t port, len; - int ret = NF_ACCEPT; - typeof(ip_nat_amanda_hook) ip_nat_amanda; - - /* Only look at packets from the Amanda server */ - if (CTINFO2DIR(ctinfo) == IP_CT_DIR_ORIGINAL) - return NF_ACCEPT; - - /* increase the UDP timeout of the master connection as replies from - * Amanda clients to the server can be quite delayed */ - ip_ct_refresh(ct, *pskb, master_timeout * HZ); - - /* No data? */ - dataoff = (*pskb)->nh.iph->ihl*4 + sizeof(struct udphdr); - if (dataoff >= (*pskb)->len) { - if (net_ratelimit()) - printk("amanda_help: skblen = %u\n", (*pskb)->len); - return NF_ACCEPT; - } - - memset(&ts, 0, sizeof(ts)); - start = skb_find_text(*pskb, dataoff, (*pskb)->len, - search[SEARCH_CONNECT].ts, &ts); - if (start == UINT_MAX) - goto out; - start += dataoff + search[SEARCH_CONNECT].len; - - memset(&ts, 0, sizeof(ts)); - stop = skb_find_text(*pskb, start, (*pskb)->len, - search[SEARCH_NEWLINE].ts, &ts); - if (stop == UINT_MAX) - goto out; - stop += start; - - for (i = SEARCH_DATA; i <= SEARCH_INDEX; i++) { - memset(&ts, 0, sizeof(ts)); - off = skb_find_text(*pskb, start, stop, search[i].ts, &ts); - if (off == UINT_MAX) - continue; - off += start + search[i].len; - - len = min_t(unsigned int, sizeof(pbuf) - 1, stop - off); - if (skb_copy_bits(*pskb, off, pbuf, len)) - break; - pbuf[len] = '\0'; - - port = simple_strtoul(pbuf, &tmp, 10); - len = tmp - pbuf; - if (port == 0 || len > 5) - break; - - exp = ip_conntrack_expect_alloc(ct); - if (exp == NULL) { - ret = NF_DROP; - goto out; - } - - exp->expectfn = NULL; - exp->flags = 0; - - exp->tuple.src.ip = ct->tuplehash[IP_CT_DIR_ORIGINAL].tuple.src.ip; - exp->tuple.src.u.tcp.port = 0; - exp->tuple.dst.ip = ct->tuplehash[IP_CT_DIR_ORIGINAL].tuple.dst.ip; - exp->tuple.dst.protonum = IPPROTO_TCP; - exp->tuple.dst.u.tcp.port = htons(port); - - exp->mask.src.ip = htonl(0xFFFFFFFF); - exp->mask.src.u.tcp.port = 0; - exp->mask.dst.ip = htonl(0xFFFFFFFF); - exp->mask.dst.protonum = 0xFF; - exp->mask.dst.u.tcp.port = htons(0xFFFF); - - /* RCU read locked by nf_hook_slow */ - ip_nat_amanda = rcu_dereference(ip_nat_amanda_hook); - if (ip_nat_amanda) - ret = ip_nat_amanda(pskb, ctinfo, off - dataoff, - len, exp); - else if (ip_conntrack_expect_related(exp) != 0) - ret = NF_DROP; - 
ip_conntrack_expect_put(exp); - } - -out: - return ret; -} - -static struct ip_conntrack_helper amanda_helper = { - .max_expected = 3, - .timeout = 180, - .me = THIS_MODULE, - .help = help, - .name = "amanda", - - .tuple = { .src = { .u = { .udp = {.port = __constant_htons(10080) } } }, - .dst = { .protonum = IPPROTO_UDP }, - }, - .mask = { .src = { .u = { 0xFFFF } }, - .dst = { .protonum = 0xFF }, - }, -}; - -static void __exit ip_conntrack_amanda_fini(void) -{ - int i; - - ip_conntrack_helper_unregister(&amanda_helper); - for (i = 0; i < ARRAY_SIZE(search); i++) - textsearch_destroy(search[i].ts); -} - -static int __init ip_conntrack_amanda_init(void) -{ - int ret, i; - - ret = -ENOMEM; - for (i = 0; i < ARRAY_SIZE(search); i++) { - search[i].ts = textsearch_prepare(ts_algo, search[i].string, - search[i].len, - GFP_KERNEL, TS_AUTOLOAD); - if (search[i].ts == NULL) - goto err; - } - ret = ip_conntrack_helper_register(&amanda_helper); - if (ret < 0) - goto err; - return 0; - -err: - for (; i >= 0; i--) { - if (search[i].ts) - textsearch_destroy(search[i].ts); - } - return ret; -} - -module_init(ip_conntrack_amanda_init); -module_exit(ip_conntrack_amanda_fini); diff -puN net/ipv4/netfilter/ip_conntrack_core.c~git-net /dev/null --- a/net/ipv4/netfilter/ip_conntrack_core.c +++ /dev/null @@ -1,1550 +0,0 @@ -/* Connection state tracking for netfilter. This is separated from, - but required by, the NAT layer; it can also be used by an iptables - extension. */ - -/* (C) 1999-2001 Paul `Rusty' Russell - * (C) 2002-2004 Netfilter Core Team - * - * This program is free software; you can redistribute it and/or modify - * it under the terms of the GNU General Public License version 2 as - * published by the Free Software Foundation. - * - * 23 Apr 2001: Harald Welte - * - new API and handling of conntrack/nat helpers - * - now capable of multiple expectations for one master - * 16 Jul 2002: Harald Welte - * - add usage/reference counts to ip_conntrack_expect - * - export ip_conntrack[_expect]_{find_get,put} functions - * */ - -#include -#include -#include -#include -#include -#include -#include -#include -#include -#include -#include -#include -#include -#include -#include -#include -#include -#include -#include -#include - -/* ip_conntrack_lock protects the main hash table, protocol/helper/expected - registrations, conntrack timers*/ -#include -#include -#include -#include - -#define IP_CONNTRACK_VERSION "2.4" - -#if 0 -#define DEBUGP printk -#else -#define DEBUGP(format, args...) 
-#endif - -DEFINE_RWLOCK(ip_conntrack_lock); - -/* ip_conntrack_standalone needs this */ -atomic_t ip_conntrack_count = ATOMIC_INIT(0); - -void (*ip_conntrack_destroyed)(struct ip_conntrack *conntrack) = NULL; -LIST_HEAD(ip_conntrack_expect_list); -struct ip_conntrack_protocol *ip_ct_protos[MAX_IP_CT_PROTO] __read_mostly; -static LIST_HEAD(helpers); -unsigned int ip_conntrack_htable_size __read_mostly = 0; -int ip_conntrack_max __read_mostly; -struct list_head *ip_conntrack_hash __read_mostly; -static struct kmem_cache *ip_conntrack_cachep __read_mostly; -static struct kmem_cache *ip_conntrack_expect_cachep __read_mostly; -struct ip_conntrack ip_conntrack_untracked; -unsigned int ip_ct_log_invalid __read_mostly; -static LIST_HEAD(unconfirmed); -static int ip_conntrack_vmalloc __read_mostly; - -static unsigned int ip_conntrack_next_id; -static unsigned int ip_conntrack_expect_next_id; -#ifdef CONFIG_IP_NF_CONNTRACK_EVENTS -ATOMIC_NOTIFIER_HEAD(ip_conntrack_chain); -ATOMIC_NOTIFIER_HEAD(ip_conntrack_expect_chain); - -DEFINE_PER_CPU(struct ip_conntrack_ecache, ip_conntrack_ecache); - -/* deliver cached events and clear cache entry - must be called with locally - * disabled softirqs */ -static inline void -__ip_ct_deliver_cached_events(struct ip_conntrack_ecache *ecache) -{ - DEBUGP("ecache: delivering events for %p\n", ecache->ct); - if (is_confirmed(ecache->ct) && !is_dying(ecache->ct) && ecache->events) - atomic_notifier_call_chain(&ip_conntrack_chain, ecache->events, - ecache->ct); - ecache->events = 0; - ip_conntrack_put(ecache->ct); - ecache->ct = NULL; -} - -/* Deliver all cached events for a particular conntrack. This is called - * by code prior to async packet handling or freeing the skb */ -void ip_ct_deliver_cached_events(const struct ip_conntrack *ct) -{ - struct ip_conntrack_ecache *ecache; - - local_bh_disable(); - ecache = &__get_cpu_var(ip_conntrack_ecache); - if (ecache->ct == ct) - __ip_ct_deliver_cached_events(ecache); - local_bh_enable(); -} - -void __ip_ct_event_cache_init(struct ip_conntrack *ct) -{ - struct ip_conntrack_ecache *ecache; - - /* take care of delivering potentially old events */ - ecache = &__get_cpu_var(ip_conntrack_ecache); - BUG_ON(ecache->ct == ct); - if (ecache->ct) - __ip_ct_deliver_cached_events(ecache); - /* initialize for this conntrack/packet */ - ecache->ct = ct; - nf_conntrack_get(&ct->ct_general); -} - -/* flush the event cache - touches other CPU's data and must not be called while - * packets are still passing through the code */ -static void ip_ct_event_cache_flush(void) -{ - struct ip_conntrack_ecache *ecache; - int cpu; - - for_each_possible_cpu(cpu) { - ecache = &per_cpu(ip_conntrack_ecache, cpu); - if (ecache->ct) - ip_conntrack_put(ecache->ct); - } -} -#else -static inline void ip_ct_event_cache_flush(void) {} -#endif /* CONFIG_IP_NF_CONNTRACK_EVENTS */ - -DEFINE_PER_CPU(struct ip_conntrack_stat, ip_conntrack_stat); - -static int ip_conntrack_hash_rnd_initted; -static unsigned int ip_conntrack_hash_rnd; - -static u_int32_t __hash_conntrack(const struct ip_conntrack_tuple *tuple, - unsigned int size, unsigned int rnd) -{ - return (jhash_3words((__force u32)tuple->src.ip, - ((__force u32)tuple->dst.ip ^ tuple->dst.protonum), - (tuple->src.u.all | (tuple->dst.u.all << 16)), - rnd) % size); -} - -static u_int32_t -hash_conntrack(const struct ip_conntrack_tuple *tuple) -{ - return __hash_conntrack(tuple, ip_conntrack_htable_size, - ip_conntrack_hash_rnd); -} - -int -ip_ct_get_tuple(const struct iphdr *iph, - const struct sk_buff *skb, - 
unsigned int dataoff, - struct ip_conntrack_tuple *tuple, - const struct ip_conntrack_protocol *protocol) -{ - /* Never happen */ - if (iph->frag_off & htons(IP_OFFSET)) { - printk("ip_conntrack_core: Frag of proto %u.\n", - iph->protocol); - return 0; - } - - tuple->src.ip = iph->saddr; - tuple->dst.ip = iph->daddr; - tuple->dst.protonum = iph->protocol; - tuple->dst.dir = IP_CT_DIR_ORIGINAL; - - return protocol->pkt_to_tuple(skb, dataoff, tuple); -} - -int -ip_ct_invert_tuple(struct ip_conntrack_tuple *inverse, - const struct ip_conntrack_tuple *orig, - const struct ip_conntrack_protocol *protocol) -{ - inverse->src.ip = orig->dst.ip; - inverse->dst.ip = orig->src.ip; - inverse->dst.protonum = orig->dst.protonum; - inverse->dst.dir = !orig->dst.dir; - - return protocol->invert_tuple(inverse, orig); -} - - -/* ip_conntrack_expect helper functions */ -void ip_ct_unlink_expect(struct ip_conntrack_expect *exp) -{ - IP_NF_ASSERT(!timer_pending(&exp->timeout)); - list_del(&exp->list); - CONNTRACK_STAT_INC(expect_delete); - exp->master->expecting--; - ip_conntrack_expect_put(exp); -} - -static void expectation_timed_out(unsigned long ul_expect) -{ - struct ip_conntrack_expect *exp = (void *)ul_expect; - - write_lock_bh(&ip_conntrack_lock); - ip_ct_unlink_expect(exp); - write_unlock_bh(&ip_conntrack_lock); - ip_conntrack_expect_put(exp); -} - -struct ip_conntrack_expect * -__ip_conntrack_expect_find(const struct ip_conntrack_tuple *tuple) -{ - struct ip_conntrack_expect *i; - - list_for_each_entry(i, &ip_conntrack_expect_list, list) { - if (ip_ct_tuple_mask_cmp(tuple, &i->tuple, &i->mask)) - return i; - } - return NULL; -} - -/* Just find a expectation corresponding to a tuple. */ -struct ip_conntrack_expect * -ip_conntrack_expect_find_get(const struct ip_conntrack_tuple *tuple) -{ - struct ip_conntrack_expect *i; - - read_lock_bh(&ip_conntrack_lock); - i = __ip_conntrack_expect_find(tuple); - if (i) - atomic_inc(&i->use); - read_unlock_bh(&ip_conntrack_lock); - - return i; -} - -/* If an expectation for this connection is found, it gets delete from - * global list then returned. */ -static struct ip_conntrack_expect * -find_expectation(const struct ip_conntrack_tuple *tuple) -{ - struct ip_conntrack_expect *i; - - list_for_each_entry(i, &ip_conntrack_expect_list, list) { - /* If master is not in hash table yet (ie. packet hasn't left - this machine yet), how can other end know about expected? - Hence these are not the droids you are looking for (if - master ct never got confirmed, we'd hold a reference to it - and weird things would happen to future packets). */ - if (ip_ct_tuple_mask_cmp(tuple, &i->tuple, &i->mask) - && is_confirmed(i->master)) { - if (i->flags & IP_CT_EXPECT_PERMANENT) { - atomic_inc(&i->use); - return i; - } else if (del_timer(&i->timeout)) { - ip_ct_unlink_expect(i); - return i; - } - } - } - return NULL; -} - -/* delete all expectations for this conntrack */ -void ip_ct_remove_expectations(struct ip_conntrack *ct) -{ - struct ip_conntrack_expect *i, *tmp; - - /* Optimization: most connection never expect any others. 
*/ - if (ct->expecting == 0) - return; - - list_for_each_entry_safe(i, tmp, &ip_conntrack_expect_list, list) { - if (i->master == ct && del_timer(&i->timeout)) { - ip_ct_unlink_expect(i); - ip_conntrack_expect_put(i); - } - } -} - -static void -clean_from_lists(struct ip_conntrack *ct) -{ - DEBUGP("clean_from_lists(%p)\n", ct); - list_del(&ct->tuplehash[IP_CT_DIR_ORIGINAL].list); - list_del(&ct->tuplehash[IP_CT_DIR_REPLY].list); - - /* Destroy all pending expectations */ - ip_ct_remove_expectations(ct); -} - -static void -destroy_conntrack(struct nf_conntrack *nfct) -{ - struct ip_conntrack *ct = (struct ip_conntrack *)nfct; - struct ip_conntrack_protocol *proto; - struct ip_conntrack_helper *helper; - typeof(ip_conntrack_destroyed) destroyed; - - DEBUGP("destroy_conntrack(%p)\n", ct); - IP_NF_ASSERT(atomic_read(&nfct->use) == 0); - IP_NF_ASSERT(!timer_pending(&ct->timeout)); - - ip_conntrack_event(IPCT_DESTROY, ct); - set_bit(IPS_DYING_BIT, &ct->status); - - helper = ct->helper; - if (helper && helper->destroy) - helper->destroy(ct); - - /* To make sure we don't get any weird locking issues here: - * destroy_conntrack() MUST NOT be called with a write lock - * to ip_conntrack_lock!!! -HW */ - rcu_read_lock(); - proto = __ip_conntrack_proto_find(ct->tuplehash[IP_CT_DIR_REPLY].tuple.dst.protonum); - if (proto && proto->destroy) - proto->destroy(ct); - - destroyed = rcu_dereference(ip_conntrack_destroyed); - if (destroyed) - destroyed(ct); - - rcu_read_unlock(); - - write_lock_bh(&ip_conntrack_lock); - /* Expectations will have been removed in clean_from_lists, - * except TFTP can create an expectation on the first packet, - * before connection is in the list, so we need to clean here, - * too. */ - ip_ct_remove_expectations(ct); - - /* We overload first tuple to link into unconfirmed list. */ - if (!is_confirmed(ct)) { - BUG_ON(list_empty(&ct->tuplehash[IP_CT_DIR_ORIGINAL].list)); - list_del(&ct->tuplehash[IP_CT_DIR_ORIGINAL].list); - } - - CONNTRACK_STAT_INC(delete); - write_unlock_bh(&ip_conntrack_lock); - - if (ct->master) - ip_conntrack_put(ct->master); - - DEBUGP("destroy_conntrack: returning ct=%p to slab\n", ct); - ip_conntrack_free(ct); -} - -static void death_by_timeout(unsigned long ul_conntrack) -{ - struct ip_conntrack *ct = (void *)ul_conntrack; - - write_lock_bh(&ip_conntrack_lock); - /* Inside lock so preempt is disabled on module removal path. - * Otherwise we can get spurious warnings. */ - CONNTRACK_STAT_INC(delete_list); - clean_from_lists(ct); - write_unlock_bh(&ip_conntrack_lock); - ip_conntrack_put(ct); -} - -struct ip_conntrack_tuple_hash * -__ip_conntrack_find(const struct ip_conntrack_tuple *tuple, - const struct ip_conntrack *ignored_conntrack) -{ - struct ip_conntrack_tuple_hash *h; - unsigned int hash = hash_conntrack(tuple); - - list_for_each_entry(h, &ip_conntrack_hash[hash], list) { - if (tuplehash_to_ctrack(h) != ignored_conntrack && - ip_ct_tuple_equal(tuple, &h->tuple)) { - CONNTRACK_STAT_INC(found); - return h; - } - CONNTRACK_STAT_INC(searched); - } - - return NULL; -} - -/* Find a connection corresponding to a tuple. 
*/ -struct ip_conntrack_tuple_hash * -ip_conntrack_find_get(const struct ip_conntrack_tuple *tuple, - const struct ip_conntrack *ignored_conntrack) -{ - struct ip_conntrack_tuple_hash *h; - - read_lock_bh(&ip_conntrack_lock); - h = __ip_conntrack_find(tuple, ignored_conntrack); - if (h) - atomic_inc(&tuplehash_to_ctrack(h)->ct_general.use); - read_unlock_bh(&ip_conntrack_lock); - - return h; -} - -static void __ip_conntrack_hash_insert(struct ip_conntrack *ct, - unsigned int hash, - unsigned int repl_hash) -{ - ct->id = ++ip_conntrack_next_id; - list_add(&ct->tuplehash[IP_CT_DIR_ORIGINAL].list, - &ip_conntrack_hash[hash]); - list_add(&ct->tuplehash[IP_CT_DIR_REPLY].list, - &ip_conntrack_hash[repl_hash]); -} - -void ip_conntrack_hash_insert(struct ip_conntrack *ct) -{ - unsigned int hash, repl_hash; - - hash = hash_conntrack(&ct->tuplehash[IP_CT_DIR_ORIGINAL].tuple); - repl_hash = hash_conntrack(&ct->tuplehash[IP_CT_DIR_REPLY].tuple); - - write_lock_bh(&ip_conntrack_lock); - __ip_conntrack_hash_insert(ct, hash, repl_hash); - write_unlock_bh(&ip_conntrack_lock); -} - -/* Confirm a connection given skb; places it in hash table */ -int -__ip_conntrack_confirm(struct sk_buff **pskb) -{ - unsigned int hash, repl_hash; - struct ip_conntrack_tuple_hash *h; - struct ip_conntrack *ct; - enum ip_conntrack_info ctinfo; - - ct = ip_conntrack_get(*pskb, &ctinfo); - - /* ipt_REJECT uses ip_conntrack_attach to attach related - ICMP/TCP RST packets in other direction. Actual packet - which created connection will be IP_CT_NEW or for an - expected connection, IP_CT_RELATED. */ - if (CTINFO2DIR(ctinfo) != IP_CT_DIR_ORIGINAL) - return NF_ACCEPT; - - hash = hash_conntrack(&ct->tuplehash[IP_CT_DIR_ORIGINAL].tuple); - repl_hash = hash_conntrack(&ct->tuplehash[IP_CT_DIR_REPLY].tuple); - - /* We're not in hash table, and we refuse to set up related - connections for unconfirmed conns. But packet copies and - REJECT will give spurious warnings here. */ - /* IP_NF_ASSERT(atomic_read(&ct->ct_general.use) == 1); */ - - /* No external references means noone else could have - confirmed us. */ - IP_NF_ASSERT(!is_confirmed(ct)); - DEBUGP("Confirming conntrack %p\n", ct); - - write_lock_bh(&ip_conntrack_lock); - - /* See if there's one in the list already, including reverse: - NAT could have grabbed it without realizing, since we're - not in the hash. If there is, we lost race. */ - list_for_each_entry(h, &ip_conntrack_hash[hash], list) - if (ip_ct_tuple_equal(&ct->tuplehash[IP_CT_DIR_ORIGINAL].tuple, - &h->tuple)) - goto out; - list_for_each_entry(h, &ip_conntrack_hash[repl_hash], list) - if (ip_ct_tuple_equal(&ct->tuplehash[IP_CT_DIR_REPLY].tuple, - &h->tuple)) - goto out; - - /* Remove from unconfirmed list */ - list_del(&ct->tuplehash[IP_CT_DIR_ORIGINAL].list); - - __ip_conntrack_hash_insert(ct, hash, repl_hash); - /* Timer relative to confirmation time, not original - setting time, otherwise we'd get timer wrap in - weird delay cases. */ - ct->timeout.expires += jiffies; - add_timer(&ct->timeout); - atomic_inc(&ct->ct_general.use); - set_bit(IPS_CONFIRMED_BIT, &ct->status); - CONNTRACK_STAT_INC(insert); - write_unlock_bh(&ip_conntrack_lock); - if (ct->helper) - ip_conntrack_event_cache(IPCT_HELPER, *pskb); -#ifdef CONFIG_IP_NF_NAT_NEEDED - if (test_bit(IPS_SRC_NAT_DONE_BIT, &ct->status) || - test_bit(IPS_DST_NAT_DONE_BIT, &ct->status)) - ip_conntrack_event_cache(IPCT_NATINFO, *pskb); -#endif - ip_conntrack_event_cache(master_ct(ct) ? 
- IPCT_RELATED : IPCT_NEW, *pskb); - - return NF_ACCEPT; - -out: - CONNTRACK_STAT_INC(insert_failed); - write_unlock_bh(&ip_conntrack_lock); - return NF_DROP; -} - -/* Returns true if a connection correspondings to the tuple (required - for NAT). */ -int -ip_conntrack_tuple_taken(const struct ip_conntrack_tuple *tuple, - const struct ip_conntrack *ignored_conntrack) -{ - struct ip_conntrack_tuple_hash *h; - - read_lock_bh(&ip_conntrack_lock); - h = __ip_conntrack_find(tuple, ignored_conntrack); - read_unlock_bh(&ip_conntrack_lock); - - return h != NULL; -} - -/* There's a small race here where we may free a just-assured - connection. Too bad: we're in trouble anyway. */ -static int early_drop(struct list_head *chain) -{ - /* Traverse backwards: gives us oldest, which is roughly LRU */ - struct ip_conntrack_tuple_hash *h; - struct ip_conntrack *ct = NULL, *tmp; - int dropped = 0; - - read_lock_bh(&ip_conntrack_lock); - list_for_each_entry_reverse(h, chain, list) { - tmp = tuplehash_to_ctrack(h); - if (!test_bit(IPS_ASSURED_BIT, &tmp->status)) { - ct = tmp; - atomic_inc(&ct->ct_general.use); - break; - } - } - read_unlock_bh(&ip_conntrack_lock); - - if (!ct) - return dropped; - - if (del_timer(&ct->timeout)) { - death_by_timeout((unsigned long)ct); - dropped = 1; - CONNTRACK_STAT_INC_ATOMIC(early_drop); - } - ip_conntrack_put(ct); - return dropped; -} - -static struct ip_conntrack_helper * -__ip_conntrack_helper_find( const struct ip_conntrack_tuple *tuple) -{ - struct ip_conntrack_helper *h; - - list_for_each_entry(h, &helpers, list) { - if (ip_ct_tuple_mask_cmp(tuple, &h->tuple, &h->mask)) - return h; - } - return NULL; -} - -struct ip_conntrack_helper * -ip_conntrack_helper_find_get( const struct ip_conntrack_tuple *tuple) -{ - struct ip_conntrack_helper *helper; - - /* need ip_conntrack_lock to assure that helper exists until - * try_module_get() is called */ - read_lock_bh(&ip_conntrack_lock); - - helper = __ip_conntrack_helper_find(tuple); - if (helper) { - /* need to increase module usage count to assure helper will - * not go away while the caller is e.g. busy putting a - * conntrack in the hash that uses the helper */ - if (!try_module_get(helper->me)) - helper = NULL; - } - - read_unlock_bh(&ip_conntrack_lock); - - return helper; -} - -void ip_conntrack_helper_put(struct ip_conntrack_helper *helper) -{ - module_put(helper->me); -} - -struct ip_conntrack_protocol * -__ip_conntrack_proto_find(u_int8_t protocol) -{ - return ip_ct_protos[protocol]; -} - -/* this is guaranteed to always return a valid protocol helper, since - * it falls back to generic_protocol */ -struct ip_conntrack_protocol * -ip_conntrack_proto_find_get(u_int8_t protocol) -{ - struct ip_conntrack_protocol *p; - - rcu_read_lock(); - p = __ip_conntrack_proto_find(protocol); - if (p) { - if (!try_module_get(p->me)) - p = &ip_conntrack_generic_protocol; - } - rcu_read_unlock(); - - return p; -} - -void ip_conntrack_proto_put(struct ip_conntrack_protocol *p) -{ - module_put(p->me); -} - -struct ip_conntrack *ip_conntrack_alloc(struct ip_conntrack_tuple *orig, - struct ip_conntrack_tuple *repl) -{ - struct ip_conntrack *conntrack; - - if (!ip_conntrack_hash_rnd_initted) { - get_random_bytes(&ip_conntrack_hash_rnd, 4); - ip_conntrack_hash_rnd_initted = 1; - } - - /* We don't want any race condition at early drop stage */ - atomic_inc(&ip_conntrack_count); - - if (ip_conntrack_max - && atomic_read(&ip_conntrack_count) > ip_conntrack_max) { - unsigned int hash = hash_conntrack(orig); - /* Try dropping from this hash chain. 
*/ - if (!early_drop(&ip_conntrack_hash[hash])) { - atomic_dec(&ip_conntrack_count); - if (net_ratelimit()) - printk(KERN_WARNING - "ip_conntrack: table full, dropping" - " packet.\n"); - return ERR_PTR(-ENOMEM); - } - } - - conntrack = kmem_cache_zalloc(ip_conntrack_cachep, GFP_ATOMIC); - if (!conntrack) { - DEBUGP("Can't allocate conntrack.\n"); - atomic_dec(&ip_conntrack_count); - return ERR_PTR(-ENOMEM); - } - - atomic_set(&conntrack->ct_general.use, 1); - conntrack->ct_general.destroy = destroy_conntrack; - conntrack->tuplehash[IP_CT_DIR_ORIGINAL].tuple = *orig; - conntrack->tuplehash[IP_CT_DIR_REPLY].tuple = *repl; - /* Don't set timer yet: wait for confirmation */ - init_timer(&conntrack->timeout); - conntrack->timeout.data = (unsigned long)conntrack; - conntrack->timeout.function = death_by_timeout; - - return conntrack; -} - -void -ip_conntrack_free(struct ip_conntrack *conntrack) -{ - atomic_dec(&ip_conntrack_count); - kmem_cache_free(ip_conntrack_cachep, conntrack); -} - -/* Allocate a new conntrack: we return -ENOMEM if classification - * failed due to stress. Otherwise it really is unclassifiable */ -static struct ip_conntrack_tuple_hash * -init_conntrack(struct ip_conntrack_tuple *tuple, - struct ip_conntrack_protocol *protocol, - struct sk_buff *skb) -{ - struct ip_conntrack *conntrack; - struct ip_conntrack_tuple repl_tuple; - struct ip_conntrack_expect *exp; - - if (!ip_ct_invert_tuple(&repl_tuple, tuple, protocol)) { - DEBUGP("Can't invert tuple.\n"); - return NULL; - } - - conntrack = ip_conntrack_alloc(tuple, &repl_tuple); - if (conntrack == NULL || IS_ERR(conntrack)) - return (struct ip_conntrack_tuple_hash *)conntrack; - - if (!protocol->new(conntrack, skb)) { - ip_conntrack_free(conntrack); - return NULL; - } - - write_lock_bh(&ip_conntrack_lock); - exp = find_expectation(tuple); - - if (exp) { - DEBUGP("conntrack: expectation arrives ct=%p exp=%p\n", - conntrack, exp); - /* Welcome, Mr. Bond. We've been expecting you... */ - __set_bit(IPS_EXPECTED_BIT, &conntrack->status); - conntrack->master = exp->master; -#ifdef CONFIG_IP_NF_CONNTRACK_MARK - conntrack->mark = exp->master->mark; -#endif -#if defined(CONFIG_IP_NF_TARGET_MASQUERADE) || \ - defined(CONFIG_IP_NF_TARGET_MASQUERADE_MODULE) - /* this is ugly, but there is no other place where to put it */ - conntrack->nat.masq_index = exp->master->nat.masq_index; -#endif -#ifdef CONFIG_IP_NF_CONNTRACK_SECMARK - conntrack->secmark = exp->master->secmark; -#endif - nf_conntrack_get(&conntrack->master->ct_general); - CONNTRACK_STAT_INC(expect_new); - } else { - conntrack->helper = __ip_conntrack_helper_find(&repl_tuple); - - CONNTRACK_STAT_INC(new); - } - - /* Overload tuple linked list to put us in unconfirmed list. 
*/ - list_add(&conntrack->tuplehash[IP_CT_DIR_ORIGINAL].list, &unconfirmed); - - write_unlock_bh(&ip_conntrack_lock); - - if (exp) { - if (exp->expectfn) - exp->expectfn(conntrack, exp); - ip_conntrack_expect_put(exp); - } - - return &conntrack->tuplehash[IP_CT_DIR_ORIGINAL]; -} - -/* On success, returns conntrack ptr, sets skb->nfct and ctinfo */ -static inline struct ip_conntrack * -resolve_normal_ct(struct sk_buff *skb, - struct ip_conntrack_protocol *proto, - int *set_reply, - unsigned int hooknum, - enum ip_conntrack_info *ctinfo) -{ - struct ip_conntrack_tuple tuple; - struct ip_conntrack_tuple_hash *h; - struct ip_conntrack *ct; - - IP_NF_ASSERT((skb->nh.iph->frag_off & htons(IP_OFFSET)) == 0); - - if (!ip_ct_get_tuple(skb->nh.iph, skb, skb->nh.iph->ihl*4, - &tuple,proto)) - return NULL; - - /* look for tuple match */ - h = ip_conntrack_find_get(&tuple, NULL); - if (!h) { - h = init_conntrack(&tuple, proto, skb); - if (!h) - return NULL; - if (IS_ERR(h)) - return (void *)h; - } - ct = tuplehash_to_ctrack(h); - - /* It exists; we have (non-exclusive) reference. */ - if (DIRECTION(h) == IP_CT_DIR_REPLY) { - *ctinfo = IP_CT_ESTABLISHED + IP_CT_IS_REPLY; - /* Please set reply bit if this packet OK */ - *set_reply = 1; - } else { - /* Once we've had two way comms, always ESTABLISHED. */ - if (test_bit(IPS_SEEN_REPLY_BIT, &ct->status)) { - DEBUGP("ip_conntrack_in: normal packet for %p\n", - ct); - *ctinfo = IP_CT_ESTABLISHED; - } else if (test_bit(IPS_EXPECTED_BIT, &ct->status)) { - DEBUGP("ip_conntrack_in: related packet for %p\n", - ct); - *ctinfo = IP_CT_RELATED; - } else { - DEBUGP("ip_conntrack_in: new packet for %p\n", - ct); - *ctinfo = IP_CT_NEW; - } - *set_reply = 0; - } - skb->nfct = &ct->ct_general; - skb->nfctinfo = *ctinfo; - return ct; -} - -/* Netfilter hook itself. */ -unsigned int ip_conntrack_in(unsigned int hooknum, - struct sk_buff **pskb, - const struct net_device *in, - const struct net_device *out, - int (*okfn)(struct sk_buff *)) -{ - struct ip_conntrack *ct; - enum ip_conntrack_info ctinfo; - struct ip_conntrack_protocol *proto; - int set_reply = 0; - int ret; - - /* Previously seen (loopback or untracked)? Ignore. */ - if ((*pskb)->nfct) { - CONNTRACK_STAT_INC_ATOMIC(ignore); - return NF_ACCEPT; - } - - /* Never happen */ - if ((*pskb)->nh.iph->frag_off & htons(IP_OFFSET)) { - if (net_ratelimit()) { - printk(KERN_ERR "ip_conntrack_in: Frag of proto %u (hook=%u)\n", - (*pskb)->nh.iph->protocol, hooknum); - } - return NF_DROP; - } - -/* Doesn't cover locally-generated broadcast, so not worth it. */ -#if 0 - /* Ignore broadcast: no `connection'. */ - if ((*pskb)->pkt_type == PACKET_BROADCAST) { - printk("Broadcast packet!\n"); - return NF_ACCEPT; - } else if (((*pskb)->nh.iph->daddr & htonl(0x000000FF)) - == htonl(0x000000FF)) { - printk("Should bcast: %u.%u.%u.%u->%u.%u.%u.%u (sk=%p, ptype=%u)\n", - NIPQUAD((*pskb)->nh.iph->saddr), - NIPQUAD((*pskb)->nh.iph->daddr), - (*pskb)->sk, (*pskb)->pkt_type); - } -#endif - - /* rcu_read_lock()ed by nf_hook_slow */ - proto = __ip_conntrack_proto_find((*pskb)->nh.iph->protocol); - - /* It may be an special packet, error, unclean... - * inverse of the return code tells to the netfilter - * core what to do with the packet. 
*/ - if (proto->error != NULL - && (ret = proto->error(*pskb, &ctinfo, hooknum)) <= 0) { - CONNTRACK_STAT_INC_ATOMIC(error); - CONNTRACK_STAT_INC_ATOMIC(invalid); - return -ret; - } - - if (!(ct = resolve_normal_ct(*pskb, proto,&set_reply,hooknum,&ctinfo))) { - /* Not valid part of a connection */ - CONNTRACK_STAT_INC_ATOMIC(invalid); - return NF_ACCEPT; - } - - if (IS_ERR(ct)) { - /* Too stressed to deal. */ - CONNTRACK_STAT_INC_ATOMIC(drop); - return NF_DROP; - } - - IP_NF_ASSERT((*pskb)->nfct); - - ret = proto->packet(ct, *pskb, ctinfo); - if (ret < 0) { - /* Invalid: inverse of the return code tells - * the netfilter core what to do*/ - nf_conntrack_put((*pskb)->nfct); - (*pskb)->nfct = NULL; - CONNTRACK_STAT_INC_ATOMIC(invalid); - return -ret; - } - - if (set_reply && !test_and_set_bit(IPS_SEEN_REPLY_BIT, &ct->status)) - ip_conntrack_event_cache(IPCT_STATUS, *pskb); - - return ret; -} - -int invert_tuplepr(struct ip_conntrack_tuple *inverse, - const struct ip_conntrack_tuple *orig) -{ - struct ip_conntrack_protocol *proto; - int ret; - - rcu_read_lock(); - proto = __ip_conntrack_proto_find(orig->dst.protonum); - ret = ip_ct_invert_tuple(inverse, orig, proto); - rcu_read_unlock(); - - return ret; -} - -/* Would two expected things clash? */ -static inline int expect_clash(const struct ip_conntrack_expect *a, - const struct ip_conntrack_expect *b) -{ - /* Part covered by intersection of masks must be unequal, - otherwise they clash */ - struct ip_conntrack_tuple intersect_mask - = { { a->mask.src.ip & b->mask.src.ip, - { a->mask.src.u.all & b->mask.src.u.all } }, - { a->mask.dst.ip & b->mask.dst.ip, - { a->mask.dst.u.all & b->mask.dst.u.all }, - a->mask.dst.protonum & b->mask.dst.protonum } }; - - return ip_ct_tuple_mask_cmp(&a->tuple, &b->tuple, &intersect_mask); -} - -static inline int expect_matches(const struct ip_conntrack_expect *a, - const struct ip_conntrack_expect *b) -{ - return a->master == b->master - && ip_ct_tuple_equal(&a->tuple, &b->tuple) - && ip_ct_tuple_equal(&a->mask, &b->mask); -} - -/* Generally a bad idea to call this: could have matched already. */ -void ip_conntrack_unexpect_related(struct ip_conntrack_expect *exp) -{ - struct ip_conntrack_expect *i; - - write_lock_bh(&ip_conntrack_lock); - /* choose the the oldest expectation to evict */ - list_for_each_entry_reverse(i, &ip_conntrack_expect_list, list) { - if (expect_matches(i, exp) && del_timer(&i->timeout)) { - ip_ct_unlink_expect(i); - write_unlock_bh(&ip_conntrack_lock); - ip_conntrack_expect_put(i); - return; - } - } - write_unlock_bh(&ip_conntrack_lock); -} - -/* We don't increase the master conntrack refcount for non-fulfilled - * conntracks. 
During the conntrack destruction, the expectations are - * always killed before the conntrack itself */ -struct ip_conntrack_expect *ip_conntrack_expect_alloc(struct ip_conntrack *me) -{ - struct ip_conntrack_expect *new; - - new = kmem_cache_alloc(ip_conntrack_expect_cachep, GFP_ATOMIC); - if (!new) { - DEBUGP("expect_related: OOM allocating expect\n"); - return NULL; - } - new->master = me; - atomic_set(&new->use, 1); - return new; -} - -void ip_conntrack_expect_put(struct ip_conntrack_expect *exp) -{ - if (atomic_dec_and_test(&exp->use)) - kmem_cache_free(ip_conntrack_expect_cachep, exp); -} - -static void ip_conntrack_expect_insert(struct ip_conntrack_expect *exp) -{ - atomic_inc(&exp->use); - exp->master->expecting++; - list_add(&exp->list, &ip_conntrack_expect_list); - - init_timer(&exp->timeout); - exp->timeout.data = (unsigned long)exp; - exp->timeout.function = expectation_timed_out; - exp->timeout.expires = jiffies + exp->master->helper->timeout * HZ; - add_timer(&exp->timeout); - - exp->id = ++ip_conntrack_expect_next_id; - atomic_inc(&exp->use); - CONNTRACK_STAT_INC(expect_create); -} - -/* Race with expectations being used means we could have none to find; OK. */ -static void evict_oldest_expect(struct ip_conntrack *master) -{ - struct ip_conntrack_expect *i; - - list_for_each_entry_reverse(i, &ip_conntrack_expect_list, list) { - if (i->master == master) { - if (del_timer(&i->timeout)) { - ip_ct_unlink_expect(i); - ip_conntrack_expect_put(i); - } - break; - } - } -} - -static inline int refresh_timer(struct ip_conntrack_expect *i) -{ - if (!del_timer(&i->timeout)) - return 0; - - i->timeout.expires = jiffies + i->master->helper->timeout*HZ; - add_timer(&i->timeout); - return 1; -} - -int ip_conntrack_expect_related(struct ip_conntrack_expect *expect) -{ - struct ip_conntrack_expect *i; - int ret; - - DEBUGP("ip_conntrack_expect_related %p\n", related_to); - DEBUGP("tuple: "); DUMP_TUPLE(&expect->tuple); - DEBUGP("mask: "); DUMP_TUPLE(&expect->mask); - - write_lock_bh(&ip_conntrack_lock); - list_for_each_entry(i, &ip_conntrack_expect_list, list) { - if (expect_matches(i, expect)) { - /* Refresh timer: if it's dying, ignore.. */ - if (refresh_timer(i)) { - ret = 0; - goto out; - } - } else if (expect_clash(i, expect)) { - ret = -EBUSY; - goto out; - } - } - - /* Will be over limit? */ - if (expect->master->helper->max_expected && - expect->master->expecting >= expect->master->helper->max_expected) - evict_oldest_expect(expect->master); - - ip_conntrack_expect_insert(expect); - ip_conntrack_expect_event(IPEXP_NEW, expect); - ret = 0; -out: - write_unlock_bh(&ip_conntrack_lock); - return ret; -} - -/* Alter reply tuple (maybe alter helper). 
This is for NAT, and is - implicitly racy: see __ip_conntrack_confirm */ -void ip_conntrack_alter_reply(struct ip_conntrack *conntrack, - const struct ip_conntrack_tuple *newreply) -{ - write_lock_bh(&ip_conntrack_lock); - /* Should be unconfirmed, so not in hash table yet */ - IP_NF_ASSERT(!is_confirmed(conntrack)); - - DEBUGP("Altering reply tuple of %p to ", conntrack); - DUMP_TUPLE(newreply); - - conntrack->tuplehash[IP_CT_DIR_REPLY].tuple = *newreply; - if (!conntrack->master && conntrack->expecting == 0) - conntrack->helper = __ip_conntrack_helper_find(newreply); - write_unlock_bh(&ip_conntrack_lock); -} - -int ip_conntrack_helper_register(struct ip_conntrack_helper *me) -{ - BUG_ON(me->timeout == 0); - write_lock_bh(&ip_conntrack_lock); - list_add(&me->list, &helpers); - write_unlock_bh(&ip_conntrack_lock); - - return 0; -} - -struct ip_conntrack_helper * -__ip_conntrack_helper_find_byname(const char *name) -{ - struct ip_conntrack_helper *h; - - list_for_each_entry(h, &helpers, list) { - if (!strcmp(h->name, name)) - return h; - } - - return NULL; -} - -static inline void unhelp(struct ip_conntrack_tuple_hash *i, - const struct ip_conntrack_helper *me) -{ - if (tuplehash_to_ctrack(i)->helper == me) { - ip_conntrack_event(IPCT_HELPER, tuplehash_to_ctrack(i)); - tuplehash_to_ctrack(i)->helper = NULL; - } -} - -void ip_conntrack_helper_unregister(struct ip_conntrack_helper *me) -{ - unsigned int i; - struct ip_conntrack_tuple_hash *h; - struct ip_conntrack_expect *exp, *tmp; - - /* Need write lock here, to delete helper. */ - write_lock_bh(&ip_conntrack_lock); - list_del(&me->list); - - /* Get rid of expectations */ - list_for_each_entry_safe(exp, tmp, &ip_conntrack_expect_list, list) { - if (exp->master->helper == me && del_timer(&exp->timeout)) { - ip_ct_unlink_expect(exp); - ip_conntrack_expect_put(exp); - } - } - /* Get rid of expecteds, set helpers to NULL. */ - list_for_each_entry(h, &unconfirmed, list) - unhelp(h, me); - for (i = 0; i < ip_conntrack_htable_size; i++) { - list_for_each_entry(h, &ip_conntrack_hash[i], list) - unhelp(h, me); - } - write_unlock_bh(&ip_conntrack_lock); - - /* Someone could be still looking at the helper in a bh. */ - synchronize_net(); -} - -/* Refresh conntrack for this many jiffies and do accounting if do_acct is 1 */ -void __ip_ct_refresh_acct(struct ip_conntrack *ct, - enum ip_conntrack_info ctinfo, - const struct sk_buff *skb, - unsigned long extra_jiffies, - int do_acct) -{ - int event = 0; - - IP_NF_ASSERT(ct->timeout.data == (unsigned long)ct); - IP_NF_ASSERT(skb); - - write_lock_bh(&ip_conntrack_lock); - - /* Only update if this is not a fixed timeout */ - if (test_bit(IPS_FIXED_TIMEOUT_BIT, &ct->status)) { - write_unlock_bh(&ip_conntrack_lock); - return; - } - - /* If not in hash table, timer will not be active yet */ - if (!is_confirmed(ct)) { - ct->timeout.expires = extra_jiffies; - event = IPCT_REFRESH; - } else { - /* Need del_timer for race avoidance (may already be dying). 
*/ - if (del_timer(&ct->timeout)) { - ct->timeout.expires = jiffies + extra_jiffies; - add_timer(&ct->timeout); - event = IPCT_REFRESH; - } - } - -#ifdef CONFIG_IP_NF_CT_ACCT - if (do_acct) { - ct->counters[CTINFO2DIR(ctinfo)].packets++; - ct->counters[CTINFO2DIR(ctinfo)].bytes += - ntohs(skb->nh.iph->tot_len); - if ((ct->counters[CTINFO2DIR(ctinfo)].packets & 0x80000000) - || (ct->counters[CTINFO2DIR(ctinfo)].bytes & 0x80000000)) - event |= IPCT_COUNTER_FILLING; - } -#endif - - write_unlock_bh(&ip_conntrack_lock); - - /* must be unlocked when calling event cache */ - if (event) - ip_conntrack_event_cache(event, skb); -} - -#if defined(CONFIG_IP_NF_CONNTRACK_NETLINK) || \ - defined(CONFIG_IP_NF_CONNTRACK_NETLINK_MODULE) -/* Generic function for tcp/udp/sctp/dccp and alike. This needs to be - * in ip_conntrack_core, since we don't want the protocols to autoload - * or depend on ctnetlink */ -int ip_ct_port_tuple_to_nfattr(struct sk_buff *skb, - const struct ip_conntrack_tuple *tuple) -{ - NFA_PUT(skb, CTA_PROTO_SRC_PORT, sizeof(__be16), - &tuple->src.u.tcp.port); - NFA_PUT(skb, CTA_PROTO_DST_PORT, sizeof(__be16), - &tuple->dst.u.tcp.port); - return 0; - -nfattr_failure: - return -1; -} - -int ip_ct_port_nfattr_to_tuple(struct nfattr *tb[], - struct ip_conntrack_tuple *t) -{ - if (!tb[CTA_PROTO_SRC_PORT-1] || !tb[CTA_PROTO_DST_PORT-1]) - return -EINVAL; - - t->src.u.tcp.port = - *(__be16 *)NFA_DATA(tb[CTA_PROTO_SRC_PORT-1]); - t->dst.u.tcp.port = - *(__be16 *)NFA_DATA(tb[CTA_PROTO_DST_PORT-1]); - - return 0; -} -#endif - -/* Returns new sk_buff, or NULL */ -struct sk_buff * -ip_ct_gather_frags(struct sk_buff *skb, u_int32_t user) -{ - skb_orphan(skb); - - local_bh_disable(); - skb = ip_defrag(skb, user); - local_bh_enable(); - - if (skb) - ip_send_check(skb->nh.iph); - return skb; -} - -/* Used by ipt_REJECT. */ -static void ip_conntrack_attach(struct sk_buff *nskb, struct sk_buff *skb) -{ - struct ip_conntrack *ct; - enum ip_conntrack_info ctinfo; - - /* This ICMP is in reverse direction to the packet which caused it */ - ct = ip_conntrack_get(skb, &ctinfo); - - if (CTINFO2DIR(ctinfo) == IP_CT_DIR_ORIGINAL) - ctinfo = IP_CT_RELATED + IP_CT_IS_REPLY; - else - ctinfo = IP_CT_RELATED; - - /* Attach to new skbuff, and increment count */ - nskb->nfct = &ct->ct_general; - nskb->nfctinfo = ctinfo; - nf_conntrack_get(nskb->nfct); -} - -/* Bring out ya dead! */ -static struct ip_conntrack * -get_next_corpse(int (*iter)(struct ip_conntrack *i, void *data), - void *data, unsigned int *bucket) -{ - struct ip_conntrack_tuple_hash *h; - struct ip_conntrack *ct; - - write_lock_bh(&ip_conntrack_lock); - for (; *bucket < ip_conntrack_htable_size; (*bucket)++) { - list_for_each_entry(h, &ip_conntrack_hash[*bucket], list) { - ct = tuplehash_to_ctrack(h); - if (iter(ct, data)) - goto found; - } - } - list_for_each_entry(h, &unconfirmed, list) { - ct = tuplehash_to_ctrack(h); - if (iter(ct, data)) - set_bit(IPS_DYING_BIT, &ct->status); - } - write_unlock_bh(&ip_conntrack_lock); - return NULL; - -found: - atomic_inc(&ct->ct_general.use); - write_unlock_bh(&ip_conntrack_lock); - return ct; -} - -void -ip_ct_iterate_cleanup(int (*iter)(struct ip_conntrack *i, void *), void *data) -{ - struct ip_conntrack *ct; - unsigned int bucket = 0; - - while ((ct = get_next_corpse(iter, data, &bucket)) != NULL) { - /* Time to push up daises... */ - if (del_timer(&ct->timeout)) - death_by_timeout((unsigned long)ct); - /* ... else the timer will get him soon. 
*/ - - ip_conntrack_put(ct); - } -} - -/* Fast function for those who don't want to parse /proc (and I don't - blame them). */ -/* Reversing the socket's dst/src point of view gives us the reply - mapping. */ -static int -getorigdst(struct sock *sk, int optval, void __user *user, int *len) -{ - struct inet_sock *inet = inet_sk(sk); - struct ip_conntrack_tuple_hash *h; - struct ip_conntrack_tuple tuple; - - IP_CT_TUPLE_U_BLANK(&tuple); - tuple.src.ip = inet->rcv_saddr; - tuple.src.u.tcp.port = inet->sport; - tuple.dst.ip = inet->daddr; - tuple.dst.u.tcp.port = inet->dport; - tuple.dst.protonum = IPPROTO_TCP; - - /* We only do TCP at the moment: is there a better way? */ - if (strcmp(sk->sk_prot->name, "TCP")) { - DEBUGP("SO_ORIGINAL_DST: Not a TCP socket\n"); - return -ENOPROTOOPT; - } - - if ((unsigned int) *len < sizeof(struct sockaddr_in)) { - DEBUGP("SO_ORIGINAL_DST: len %u not %u\n", - *len, sizeof(struct sockaddr_in)); - return -EINVAL; - } - - h = ip_conntrack_find_get(&tuple, NULL); - if (h) { - struct sockaddr_in sin; - struct ip_conntrack *ct = tuplehash_to_ctrack(h); - - sin.sin_family = AF_INET; - sin.sin_port = ct->tuplehash[IP_CT_DIR_ORIGINAL] - .tuple.dst.u.tcp.port; - sin.sin_addr.s_addr = ct->tuplehash[IP_CT_DIR_ORIGINAL] - .tuple.dst.ip; - memset(sin.sin_zero, 0, sizeof(sin.sin_zero)); - - DEBUGP("SO_ORIGINAL_DST: %u.%u.%u.%u %u\n", - NIPQUAD(sin.sin_addr.s_addr), ntohs(sin.sin_port)); - ip_conntrack_put(ct); - if (copy_to_user(user, &sin, sizeof(sin)) != 0) - return -EFAULT; - else - return 0; - } - DEBUGP("SO_ORIGINAL_DST: Can't find %u.%u.%u.%u/%u-%u.%u.%u.%u/%u.\n", - NIPQUAD(tuple.src.ip), ntohs(tuple.src.u.tcp.port), - NIPQUAD(tuple.dst.ip), ntohs(tuple.dst.u.tcp.port)); - return -ENOENT; -} - -static struct nf_sockopt_ops so_getorigdst = { - .pf = PF_INET, - .get_optmin = SO_ORIGINAL_DST, - .get_optmax = SO_ORIGINAL_DST+1, - .get = &getorigdst, -}; - -static int kill_all(struct ip_conntrack *i, void *data) -{ - return 1; -} - -void ip_conntrack_flush(void) -{ - ip_ct_iterate_cleanup(kill_all, NULL); -} - -static void free_conntrack_hash(struct list_head *hash, int vmalloced,int size) -{ - if (vmalloced) - vfree(hash); - else - free_pages((unsigned long)hash, - get_order(sizeof(struct list_head) * size)); -} - -/* Mishearing the voices in his head, our hero wonders how he's - supposed to kill the mall. */ -void ip_conntrack_cleanup(void) -{ - rcu_assign_pointer(ip_ct_attach, NULL); - - /* This makes sure all current packets have passed through - netfilter framework. Roll on, two-stage module - delete... 
*/ - synchronize_net(); - - ip_ct_event_cache_flush(); - i_see_dead_people: - ip_conntrack_flush(); - if (atomic_read(&ip_conntrack_count) != 0) { - schedule(); - goto i_see_dead_people; - } - /* wait until all references to ip_conntrack_untracked are dropped */ - while (atomic_read(&ip_conntrack_untracked.ct_general.use) > 1) - schedule(); - - kmem_cache_destroy(ip_conntrack_cachep); - kmem_cache_destroy(ip_conntrack_expect_cachep); - free_conntrack_hash(ip_conntrack_hash, ip_conntrack_vmalloc, - ip_conntrack_htable_size); - nf_unregister_sockopt(&so_getorigdst); -} - -static struct list_head *alloc_hashtable(int size, int *vmalloced) -{ - struct list_head *hash; - unsigned int i; - - *vmalloced = 0; - hash = (void*)__get_free_pages(GFP_KERNEL, - get_order(sizeof(struct list_head) - * size)); - if (!hash) { - *vmalloced = 1; - printk(KERN_WARNING"ip_conntrack: falling back to vmalloc.\n"); - hash = vmalloc(sizeof(struct list_head) * size); - } - - if (hash) - for (i = 0; i < size; i++) - INIT_LIST_HEAD(&hash[i]); - - return hash; -} - -static int set_hashsize(const char *val, struct kernel_param *kp) -{ - int i, bucket, hashsize, vmalloced; - int old_vmalloced, old_size; - int rnd; - struct list_head *hash, *old_hash; - struct ip_conntrack_tuple_hash *h; - - /* On boot, we can set this without any fancy locking. */ - if (!ip_conntrack_htable_size) - return param_set_int(val, kp); - - hashsize = simple_strtol(val, NULL, 0); - if (!hashsize) - return -EINVAL; - - hash = alloc_hashtable(hashsize, &vmalloced); - if (!hash) - return -ENOMEM; - - /* We have to rehash for the new table anyway, so we also can - * use a new random seed */ - get_random_bytes(&rnd, 4); - - write_lock_bh(&ip_conntrack_lock); - for (i = 0; i < ip_conntrack_htable_size; i++) { - while (!list_empty(&ip_conntrack_hash[i])) { - h = list_entry(ip_conntrack_hash[i].next, - struct ip_conntrack_tuple_hash, list); - list_del(&h->list); - bucket = __hash_conntrack(&h->tuple, hashsize, rnd); - list_add_tail(&h->list, &hash[bucket]); - } - } - old_size = ip_conntrack_htable_size; - old_vmalloced = ip_conntrack_vmalloc; - old_hash = ip_conntrack_hash; - - ip_conntrack_htable_size = hashsize; - ip_conntrack_vmalloc = vmalloced; - ip_conntrack_hash = hash; - ip_conntrack_hash_rnd = rnd; - write_unlock_bh(&ip_conntrack_lock); - - free_conntrack_hash(old_hash, old_vmalloced, old_size); - return 0; -} - -module_param_call(hashsize, set_hashsize, param_get_uint, - &ip_conntrack_htable_size, 0600); - -int __init ip_conntrack_init(void) -{ - unsigned int i; - int ret; - - /* Idea from tcp.c: use 1/16384 of memory. On i386: 32MB - * machine has 256 buckets. >= 1GB machines have 8192 buckets. 
*/ - if (!ip_conntrack_htable_size) { - ip_conntrack_htable_size - = (((num_physpages << PAGE_SHIFT) / 16384) - / sizeof(struct list_head)); - if (num_physpages > (1024 * 1024 * 1024 / PAGE_SIZE)) - ip_conntrack_htable_size = 8192; - if (ip_conntrack_htable_size < 16) - ip_conntrack_htable_size = 16; - } - ip_conntrack_max = 8 * ip_conntrack_htable_size; - - printk("ip_conntrack version %s (%u buckets, %d max)" - " - %Zd bytes per conntrack\n", IP_CONNTRACK_VERSION, - ip_conntrack_htable_size, ip_conntrack_max, - sizeof(struct ip_conntrack)); - - ret = nf_register_sockopt(&so_getorigdst); - if (ret != 0) { - printk(KERN_ERR "Unable to register netfilter socket option\n"); - return ret; - } - - ip_conntrack_hash = alloc_hashtable(ip_conntrack_htable_size, - &ip_conntrack_vmalloc); - if (!ip_conntrack_hash) { - printk(KERN_ERR "Unable to create ip_conntrack_hash\n"); - goto err_unreg_sockopt; - } - - ip_conntrack_cachep = kmem_cache_create("ip_conntrack", - sizeof(struct ip_conntrack), 0, - 0, NULL, NULL); - if (!ip_conntrack_cachep) { - printk(KERN_ERR "Unable to create ip_conntrack slab cache\n"); - goto err_free_hash; - } - - ip_conntrack_expect_cachep = kmem_cache_create("ip_conntrack_expect", - sizeof(struct ip_conntrack_expect), - 0, 0, NULL, NULL); - if (!ip_conntrack_expect_cachep) { - printk(KERN_ERR "Unable to create ip_expect slab cache\n"); - goto err_free_conntrack_slab; - } - - /* Don't NEED lock here, but good form anyway. */ - write_lock_bh(&ip_conntrack_lock); - for (i = 0; i < MAX_IP_CT_PROTO; i++) - rcu_assign_pointer(ip_ct_protos[i], &ip_conntrack_generic_protocol); - /* Sew in builtin protocols. */ - rcu_assign_pointer(ip_ct_protos[IPPROTO_TCP], &ip_conntrack_protocol_tcp); - rcu_assign_pointer(ip_ct_protos[IPPROTO_UDP], &ip_conntrack_protocol_udp); - rcu_assign_pointer(ip_ct_protos[IPPROTO_ICMP], &ip_conntrack_protocol_icmp); - write_unlock_bh(&ip_conntrack_lock); - - /* For use by ipt_REJECT */ - rcu_assign_pointer(ip_ct_attach, ip_conntrack_attach); - - /* Set up fake conntrack: - - to never be deleted, not in any hashes */ - atomic_set(&ip_conntrack_untracked.ct_general.use, 1); - /* - and look it like as a confirmed connection */ - set_bit(IPS_CONFIRMED_BIT, &ip_conntrack_untracked.status); - - return ret; - -err_free_conntrack_slab: - kmem_cache_destroy(ip_conntrack_cachep); -err_free_hash: - free_conntrack_hash(ip_conntrack_hash, ip_conntrack_vmalloc, - ip_conntrack_htable_size); -err_unreg_sockopt: - nf_unregister_sockopt(&so_getorigdst); - - return -ENOMEM; -} diff -puN net/ipv4/netfilter/ip_conntrack_ftp.c~git-net /dev/null --- a/net/ipv4/netfilter/ip_conntrack_ftp.c +++ /dev/null @@ -1,520 +0,0 @@ -/* FTP extension for IP connection tracking. */ - -/* (C) 1999-2001 Paul `Rusty' Russell - * (C) 2002-2004 Netfilter Core Team - * - * This program is free software; you can redistribute it and/or modify - * it under the terms of the GNU General Public License version 2 as - * published by the Free Software Foundation. - */ - -#include -#include -#include -#include -#include -#include - -#include -#include -#include - -MODULE_LICENSE("GPL"); -MODULE_AUTHOR("Rusty Russell "); -MODULE_DESCRIPTION("ftp connection tracking helper"); - -/* This is slow, but it's simple. 
--RR */ -static char *ftp_buffer; -static DEFINE_SPINLOCK(ip_ftp_lock); - -#define MAX_PORTS 8 -static unsigned short ports[MAX_PORTS]; -static int ports_c; -module_param_array(ports, ushort, &ports_c, 0400); - -static int loose; -module_param(loose, bool, 0600); - -unsigned int (*ip_nat_ftp_hook)(struct sk_buff **pskb, - enum ip_conntrack_info ctinfo, - enum ip_ct_ftp_type type, - unsigned int matchoff, - unsigned int matchlen, - struct ip_conntrack_expect *exp, - u32 *seq); -EXPORT_SYMBOL_GPL(ip_nat_ftp_hook); - -#if 0 -#define DEBUGP printk -#else -#define DEBUGP(format, args...) -#endif - -static int try_rfc959(const char *, size_t, u_int32_t [], char); -static int try_eprt(const char *, size_t, u_int32_t [], char); -static int try_epsv_response(const char *, size_t, u_int32_t [], char); - -static const struct ftp_search { - const char *pattern; - size_t plen; - char skip; - char term; - enum ip_ct_ftp_type ftptype; - int (*getnum)(const char *, size_t, u_int32_t[], char); -} search[IP_CT_DIR_MAX][2] = { - [IP_CT_DIR_ORIGINAL] = { - { - .pattern = "PORT", - .plen = sizeof("PORT") - 1, - .skip = ' ', - .term = '\r', - .ftptype = IP_CT_FTP_PORT, - .getnum = try_rfc959, - }, - { - .pattern = "EPRT", - .plen = sizeof("EPRT") - 1, - .skip = ' ', - .term = '\r', - .ftptype = IP_CT_FTP_EPRT, - .getnum = try_eprt, - }, - }, - [IP_CT_DIR_REPLY] = { - { - .pattern = "227 ", - .plen = sizeof("227 ") - 1, - .skip = '(', - .term = ')', - .ftptype = IP_CT_FTP_PASV, - .getnum = try_rfc959, - }, - { - .pattern = "229 ", - .plen = sizeof("229 ") - 1, - .skip = '(', - .term = ')', - .ftptype = IP_CT_FTP_EPSV, - .getnum = try_epsv_response, - }, - }, -}; - -static int try_number(const char *data, size_t dlen, u_int32_t array[], - int array_size, char sep, char term) -{ - u_int32_t i, len; - - memset(array, 0, sizeof(array[0])*array_size); - - /* Keep data pointing at next char. */ - for (i = 0, len = 0; len < dlen && i < array_size; len++, data++) { - if (*data >= '0' && *data <= '9') { - array[i] = array[i]*10 + *data - '0'; - } - else if (*data == sep) - i++; - else { - /* Unexpected character; true if it's the - terminator and we're finished. */ - if (*data == term && i == array_size - 1) - return len; - - DEBUGP("Char %u (got %u nums) `%u' unexpected\n", - len, i, *data); - return 0; - } - } - DEBUGP("Failed to fill %u numbers separated by %c\n", array_size, sep); - - return 0; -} - -/* Returns 0, or length of numbers: 192,168,1,1,5,6 */ -static int try_rfc959(const char *data, size_t dlen, u_int32_t array[6], - char term) -{ - return try_number(data, dlen, array, 6, ',', term); -} - -/* Grab port: number up to delimiter */ -static int get_port(const char *data, int start, size_t dlen, char delim, - u_int32_t array[2]) -{ - u_int16_t port = 0; - int i; - - for (i = start; i < dlen; i++) { - /* Finished? */ - if (data[i] == delim) { - if (port == 0) - break; - array[0] = port >> 8; - array[1] = port; - return i + 1; - } - else if (data[i] >= '0' && data[i] <= '9') - port = port*10 + data[i] - '0'; - else /* Some other crap */ - break; - } - return 0; -} - -/* Returns 0, or length of numbers: |1|132.235.1.2|6275| */ -static int try_eprt(const char *data, size_t dlen, u_int32_t array[6], - char term) -{ - char delim; - int length; - - /* First character is delimiter, then "1" for IPv4, then - delimiter again. 
*/ - if (dlen <= 3) return 0; - delim = data[0]; - if (isdigit(delim) || delim < 33 || delim > 126 - || data[1] != '1' || data[2] != delim) - return 0; - - DEBUGP("EPRT: Got |1|!\n"); - /* Now we have IP address. */ - length = try_number(data + 3, dlen - 3, array, 4, '.', delim); - if (length == 0) - return 0; - - DEBUGP("EPRT: Got IP address!\n"); - /* Start offset includes initial "|1|", and trailing delimiter */ - return get_port(data, 3 + length + 1, dlen, delim, array+4); -} - -/* Returns 0, or length of numbers: |||6446| */ -static int try_epsv_response(const char *data, size_t dlen, u_int32_t array[6], - char term) -{ - char delim; - - /* Three delimiters. */ - if (dlen <= 3) return 0; - delim = data[0]; - if (isdigit(delim) || delim < 33 || delim > 126 - || data[1] != delim || data[2] != delim) - return 0; - - return get_port(data, 3, dlen, delim, array+4); -} - -/* Return 1 for match, 0 for accept, -1 for partial. */ -static int find_pattern(const char *data, size_t dlen, - const char *pattern, size_t plen, - char skip, char term, - unsigned int *numoff, - unsigned int *numlen, - u_int32_t array[6], - int (*getnum)(const char *, size_t, u_int32_t[], char)) -{ - size_t i; - - DEBUGP("find_pattern `%s': dlen = %u\n", pattern, dlen); - if (dlen == 0) - return 0; - - if (dlen <= plen) { - /* Short packet: try for partial? */ - if (strnicmp(data, pattern, dlen) == 0) - return -1; - else return 0; - } - - if (strnicmp(data, pattern, plen) != 0) { -#if 0 - size_t i; - - DEBUGP("ftp: string mismatch\n"); - for (i = 0; i < plen; i++) { - DEBUGP("ftp:char %u `%c'(%u) vs `%c'(%u)\n", - i, data[i], data[i], - pattern[i], pattern[i]); - } -#endif - return 0; - } - - DEBUGP("Pattern matches!\n"); - /* Now we've found the constant string, try to skip - to the 'skip' character */ - for (i = plen; data[i] != skip; i++) - if (i == dlen - 1) return -1; - - /* Skip over the last character */ - i++; - - DEBUGP("Skipped up to `%c'!\n", skip); - - *numoff = i; - *numlen = getnum(data + i, dlen - i, array, term); - if (!*numlen) - return -1; - - DEBUGP("Match succeeded!\n"); - return 1; -} - -/* Look up to see if we're just after a \n. */ -static int find_nl_seq(u32 seq, const struct ip_ct_ftp_master *info, int dir) -{ - unsigned int i; - - for (i = 0; i < info->seq_aft_nl_num[dir]; i++) - if (info->seq_aft_nl[dir][i] == seq) - return 1; - return 0; -} - -/* We don't update if it's older than what we have. */ -static void update_nl_seq(u32 nl_seq, struct ip_ct_ftp_master *info, int dir, - struct sk_buff *skb) -{ - unsigned int i, oldest = NUM_SEQ_TO_REMEMBER; - - /* Look for oldest: if we find exact match, we're done. 
*/ - for (i = 0; i < info->seq_aft_nl_num[dir]; i++) { - if (info->seq_aft_nl[dir][i] == nl_seq) - return; - - if (oldest == info->seq_aft_nl_num[dir] - || before(info->seq_aft_nl[dir][i], oldest)) - oldest = i; - } - - if (info->seq_aft_nl_num[dir] < NUM_SEQ_TO_REMEMBER) { - info->seq_aft_nl[dir][info->seq_aft_nl_num[dir]++] = nl_seq; - ip_conntrack_event_cache(IPCT_HELPINFO_VOLATILE, skb); - } else if (oldest != NUM_SEQ_TO_REMEMBER) { - info->seq_aft_nl[dir][oldest] = nl_seq; - ip_conntrack_event_cache(IPCT_HELPINFO_VOLATILE, skb); - } -} - -static int help(struct sk_buff **pskb, - struct ip_conntrack *ct, - enum ip_conntrack_info ctinfo) -{ - unsigned int dataoff, datalen; - struct tcphdr _tcph, *th; - char *fb_ptr; - int ret; - u32 seq, array[6] = { 0 }; - int dir = CTINFO2DIR(ctinfo); - unsigned int matchlen, matchoff; - struct ip_ct_ftp_master *ct_ftp_info = &ct->help.ct_ftp_info; - struct ip_conntrack_expect *exp; - unsigned int i; - int found = 0, ends_in_nl; - typeof(ip_nat_ftp_hook) ip_nat_ftp; - - /* Until there's been traffic both ways, don't look in packets. */ - if (ctinfo != IP_CT_ESTABLISHED - && ctinfo != IP_CT_ESTABLISHED+IP_CT_IS_REPLY) { - DEBUGP("ftp: Conntrackinfo = %u\n", ctinfo); - return NF_ACCEPT; - } - - th = skb_header_pointer(*pskb, (*pskb)->nh.iph->ihl*4, - sizeof(_tcph), &_tcph); - if (th == NULL) - return NF_ACCEPT; - - dataoff = (*pskb)->nh.iph->ihl*4 + th->doff*4; - /* No data? */ - if (dataoff >= (*pskb)->len) { - DEBUGP("ftp: pskblen = %u\n", (*pskb)->len); - return NF_ACCEPT; - } - datalen = (*pskb)->len - dataoff; - - spin_lock_bh(&ip_ftp_lock); - fb_ptr = skb_header_pointer(*pskb, dataoff, - (*pskb)->len - dataoff, ftp_buffer); - BUG_ON(fb_ptr == NULL); - - ends_in_nl = (fb_ptr[datalen - 1] == '\n'); - seq = ntohl(th->seq) + datalen; - - /* Look up to see if we're just after a \n. */ - if (!find_nl_seq(ntohl(th->seq), ct_ftp_info, dir)) { - /* Now if this ends in \n, update ftp info. */ - DEBUGP("ip_conntrack_ftp_help: wrong seq pos %s(%u) or %s(%u)\n", - ct_ftp_info->seq_aft_nl[0][dir] - old_seq_aft_nl_set ? "":"(UNSET) ", old_seq_aft_nl); - ret = NF_ACCEPT; - goto out_update_nl; - } - - /* Initialize IP array to expected address (it's not mentioned - in EPSV responses) */ - array[0] = (ntohl(ct->tuplehash[dir].tuple.src.ip) >> 24) & 0xFF; - array[1] = (ntohl(ct->tuplehash[dir].tuple.src.ip) >> 16) & 0xFF; - array[2] = (ntohl(ct->tuplehash[dir].tuple.src.ip) >> 8) & 0xFF; - array[3] = ntohl(ct->tuplehash[dir].tuple.src.ip) & 0xFF; - - for (i = 0; i < ARRAY_SIZE(search[dir]); i++) { - found = find_pattern(fb_ptr, (*pskb)->len - dataoff, - search[dir][i].pattern, - search[dir][i].plen, - search[dir][i].skip, - search[dir][i].term, - &matchoff, &matchlen, - array, - search[dir][i].getnum); - if (found) break; - } - if (found == -1) { - /* We don't usually drop packets. After all, this is - connection tracking, not packet filtering. - However, it is necessary for accurate tracking in - this case. 
*/ - if (net_ratelimit()) - printk("conntrack_ftp: partial %s %u+%u\n", - search[dir][i].pattern, - ntohl(th->seq), datalen); - ret = NF_DROP; - goto out; - } else if (found == 0) { /* No match */ - ret = NF_ACCEPT; - goto out_update_nl; - } - - DEBUGP("conntrack_ftp: match `%s' (%u bytes at %u)\n", - fb_ptr + matchoff, matchlen, ntohl(th->seq) + matchoff); - - /* Allocate expectation which will be inserted */ - exp = ip_conntrack_expect_alloc(ct); - if (exp == NULL) { - ret = NF_DROP; - goto out; - } - - /* We refer to the reverse direction ("!dir") tuples here, - * because we're expecting something in the other direction. - * Doesn't matter unless NAT is happening. */ - exp->tuple.dst.ip = ct->tuplehash[!dir].tuple.dst.ip; - - if (htonl((array[0] << 24) | (array[1] << 16) | (array[2] << 8) | array[3]) - != ct->tuplehash[dir].tuple.src.ip) { - /* Enrico Scholz's passive FTP to partially RNAT'd ftp - server: it really wants us to connect to a - different IP address. Simply don't record it for - NAT. */ - DEBUGP("conntrack_ftp: NOT RECORDING: %u,%u,%u,%u != %u.%u.%u.%u\n", - array[0], array[1], array[2], array[3], - NIPQUAD(ct->tuplehash[dir].tuple.src.ip)); - - /* Thanks to Cristiano Lincoln Mattos - for reporting this potential - problem (DMZ machines opening holes to internal - networks, or the packet filter itself). */ - if (!loose) { - ret = NF_ACCEPT; - goto out_put_expect; - } - exp->tuple.dst.ip = htonl((array[0] << 24) | (array[1] << 16) - | (array[2] << 8) | array[3]); - } - - exp->tuple.src.ip = ct->tuplehash[!dir].tuple.src.ip; - exp->tuple.dst.u.tcp.port = htons(array[4] << 8 | array[5]); - exp->tuple.src.u.tcp.port = 0; /* Don't care. */ - exp->tuple.dst.protonum = IPPROTO_TCP; - exp->mask = ((struct ip_conntrack_tuple) - { { htonl(0xFFFFFFFF), { 0 } }, - { htonl(0xFFFFFFFF), { .tcp = { htons(0xFFFF) } }, 0xFF }}); - - exp->expectfn = NULL; - exp->flags = 0; - - /* Now, NAT might want to mangle the packet, and register the - * (possibly changed) expectation itself. */ - ip_nat_ftp = rcu_dereference(ip_nat_ftp_hook); - if (ip_nat_ftp) - ret = ip_nat_ftp(pskb, ctinfo, search[dir][i].ftptype, - matchoff, matchlen, exp, &seq); - else { - /* Can't expect this? Best to drop packet now. */ - if (ip_conntrack_expect_related(exp) != 0) - ret = NF_DROP; - else - ret = NF_ACCEPT; - } - -out_put_expect: - ip_conntrack_expect_put(exp); - -out_update_nl: - /* Now if this ends in \n, update ftp info. Seq may have been - * adjusted by NAT code. 
*/ - if (ends_in_nl) - update_nl_seq(seq, ct_ftp_info,dir, *pskb); - out: - spin_unlock_bh(&ip_ftp_lock); - return ret; -} - -static struct ip_conntrack_helper ftp[MAX_PORTS]; -static char ftp_names[MAX_PORTS][sizeof("ftp-65535")]; - -/* Not __exit: called from init() */ -static void ip_conntrack_ftp_fini(void) -{ - int i; - for (i = 0; i < ports_c; i++) { - DEBUGP("ip_ct_ftp: unregistering helper for port %d\n", - ports[i]); - ip_conntrack_helper_unregister(&ftp[i]); - } - - kfree(ftp_buffer); -} - -static int __init ip_conntrack_ftp_init(void) -{ - int i, ret; - char *tmpname; - - ftp_buffer = kmalloc(65536, GFP_KERNEL); - if (!ftp_buffer) - return -ENOMEM; - - if (ports_c == 0) - ports[ports_c++] = FTP_PORT; - - for (i = 0; i < ports_c; i++) { - ftp[i].tuple.src.u.tcp.port = htons(ports[i]); - ftp[i].tuple.dst.protonum = IPPROTO_TCP; - ftp[i].mask.src.u.tcp.port = htons(0xFFFF); - ftp[i].mask.dst.protonum = 0xFF; - ftp[i].max_expected = 1; - ftp[i].timeout = 5 * 60; /* 5 minutes */ - ftp[i].me = THIS_MODULE; - ftp[i].help = help; - - tmpname = &ftp_names[i][0]; - if (ports[i] == FTP_PORT) - sprintf(tmpname, "ftp"); - else - sprintf(tmpname, "ftp-%d", ports[i]); - ftp[i].name = tmpname; - - DEBUGP("ip_ct_ftp: registering helper for port %d\n", - ports[i]); - ret = ip_conntrack_helper_register(&ftp[i]); - - if (ret) { - ip_conntrack_ftp_fini(); - return ret; - } - } - return 0; -} - -module_init(ip_conntrack_ftp_init); -module_exit(ip_conntrack_ftp_fini); diff -puN net/ipv4/netfilter/ip_conntrack_helper_h323.c~git-net /dev/null --- a/net/ipv4/netfilter/ip_conntrack_helper_h323.c +++ /dev/null @@ -1,1841 +0,0 @@ -/* - * H.323 connection tracking helper - * - * Copyright (c) 2006 Jing Min Zhao - * - * This source code is licensed under General Public License version 2. - * - * Based on the 'brute force' H.323 connection tracking module by - * Jozsef Kadlecsik - * - * For more information, please see http://nath323.sourceforge.net/ - */ - -#include -#include -#include -#include -#include -#include -#include -#include -#include -#include -#include -#include - -#if 0 -#define DEBUGP printk -#else -#define DEBUGP(format, args...) 
-#endif - -/* Parameters */ -static unsigned int default_rrq_ttl = 300; -module_param(default_rrq_ttl, uint, 0600); -MODULE_PARM_DESC(default_rrq_ttl, "use this TTL if it's missing in RRQ"); - -static int gkrouted_only = 1; -module_param(gkrouted_only, int, 0600); -MODULE_PARM_DESC(gkrouted_only, "only accept calls from gatekeeper"); - -static int callforward_filter = 1; -module_param(callforward_filter, bool, 0600); -MODULE_PARM_DESC(callforward_filter, "only create call forwarding expectations " - "if both endpoints are on different sides " - "(determined by routing information)"); - -/* Hooks for NAT */ -int (*set_h245_addr_hook) (struct sk_buff ** pskb, - unsigned char **data, int dataoff, - H245_TransportAddress * addr, - __be32 ip, u_int16_t port); -int (*set_h225_addr_hook) (struct sk_buff ** pskb, - unsigned char **data, int dataoff, - TransportAddress * addr, - __be32 ip, u_int16_t port); -int (*set_sig_addr_hook) (struct sk_buff ** pskb, - struct ip_conntrack * ct, - enum ip_conntrack_info ctinfo, - unsigned char **data, - TransportAddress * addr, int count); -int (*set_ras_addr_hook) (struct sk_buff ** pskb, - struct ip_conntrack * ct, - enum ip_conntrack_info ctinfo, - unsigned char **data, - TransportAddress * addr, int count); -int (*nat_rtp_rtcp_hook) (struct sk_buff ** pskb, - struct ip_conntrack * ct, - enum ip_conntrack_info ctinfo, - unsigned char **data, int dataoff, - H245_TransportAddress * addr, - u_int16_t port, u_int16_t rtp_port, - struct ip_conntrack_expect * rtp_exp, - struct ip_conntrack_expect * rtcp_exp); -int (*nat_t120_hook) (struct sk_buff ** pskb, - struct ip_conntrack * ct, - enum ip_conntrack_info ctinfo, - unsigned char **data, int dataoff, - H245_TransportAddress * addr, u_int16_t port, - struct ip_conntrack_expect * exp); -int (*nat_h245_hook) (struct sk_buff ** pskb, - struct ip_conntrack * ct, - enum ip_conntrack_info ctinfo, - unsigned char **data, int dataoff, - TransportAddress * addr, u_int16_t port, - struct ip_conntrack_expect * exp); -int (*nat_callforwarding_hook) (struct sk_buff ** pskb, - struct ip_conntrack * ct, - enum ip_conntrack_info ctinfo, - unsigned char **data, int dataoff, - TransportAddress * addr, u_int16_t port, - struct ip_conntrack_expect * exp); -int (*nat_q931_hook) (struct sk_buff ** pskb, - struct ip_conntrack * ct, - enum ip_conntrack_info ctinfo, - unsigned char **data, TransportAddress * addr, int idx, - u_int16_t port, struct ip_conntrack_expect * exp); - - -static DEFINE_SPINLOCK(ip_h323_lock); -static char *h323_buffer; - -/****************************************************************************/ -static int get_tpkt_data(struct sk_buff **pskb, struct ip_conntrack *ct, - enum ip_conntrack_info ctinfo, - unsigned char **data, int *datalen, int *dataoff) -{ - struct ip_ct_h323_master *info = &ct->help.ct_h323_info; - int dir = CTINFO2DIR(ctinfo); - struct tcphdr _tcph, *th; - int tcpdatalen; - int tcpdataoff; - unsigned char *tpkt; - int tpktlen; - int tpktoff; - - /* Get TCP header */ - th = skb_header_pointer(*pskb, (*pskb)->nh.iph->ihl * 4, - sizeof(_tcph), &_tcph); - if (th == NULL) - return 0; - - /* Get TCP data offset */ - tcpdataoff = (*pskb)->nh.iph->ihl * 4 + th->doff * 4; - - /* Get TCP data length */ - tcpdatalen = (*pskb)->len - tcpdataoff; - if (tcpdatalen <= 0) /* No TCP data */ - goto clear_out; - - if (*data == NULL) { /* first TPKT */ - /* Get first TPKT pointer */ - tpkt = skb_header_pointer(*pskb, tcpdataoff, tcpdatalen, - h323_buffer); - BUG_ON(tpkt == NULL); - - /* Validate TPKT identifier 
*/ - if (tcpdatalen < 4 || tpkt[0] != 0x03 || tpkt[1] != 0) { - /* Netmeeting sends TPKT header and data separately */ - if (info->tpkt_len[dir] > 0) { - DEBUGP("ip_ct_h323: previous packet " - "indicated separate TPKT data of %hu " - "bytes\n", info->tpkt_len[dir]); - if (info->tpkt_len[dir] <= tcpdatalen) { - /* Yes, there was a TPKT header - * received */ - *data = tpkt; - *datalen = info->tpkt_len[dir]; - *dataoff = 0; - goto out; - } - - /* Fragmented TPKT */ - if (net_ratelimit()) - printk("ip_ct_h323: " - "fragmented TPKT\n"); - goto clear_out; - } - - /* It is not even a TPKT */ - return 0; - } - tpktoff = 0; - } else { /* Next TPKT */ - tpktoff = *dataoff + *datalen; - tcpdatalen -= tpktoff; - if (tcpdatalen <= 4) /* No more TPKT */ - goto clear_out; - tpkt = *data + *datalen; - - /* Validate TPKT identifier */ - if (tpkt[0] != 0x03 || tpkt[1] != 0) - goto clear_out; - } - - /* Validate TPKT length */ - tpktlen = tpkt[2] * 256 + tpkt[3]; - if (tpktlen < 4) - goto clear_out; - if (tpktlen > tcpdatalen) { - if (tcpdatalen == 4) { /* Separate TPKT header */ - /* Netmeeting sends TPKT header and data separately */ - DEBUGP("ip_ct_h323: separate TPKT header indicates " - "there will be TPKT data of %hu bytes\n", - tpktlen - 4); - info->tpkt_len[dir] = tpktlen - 4; - return 0; - } - - if (net_ratelimit()) - printk("ip_ct_h323: incomplete TPKT (fragmented?)\n"); - goto clear_out; - } - - /* This is the encapsulated data */ - *data = tpkt + 4; - *datalen = tpktlen - 4; - *dataoff = tpktoff + 4; - - out: - /* Clear TPKT length */ - info->tpkt_len[dir] = 0; - return 1; - - clear_out: - info->tpkt_len[dir] = 0; - return 0; -} - -/****************************************************************************/ -static int get_h245_addr(unsigned char *data, H245_TransportAddress * addr, - __be32 * ip, u_int16_t * port) -{ - unsigned char *p; - - if (addr->choice != eH245_TransportAddress_unicastAddress || - addr->unicastAddress.choice != eUnicastAddress_iPAddress) - return 0; - - p = data + addr->unicastAddress.iPAddress.network; - *ip = htonl((p[0] << 24) | (p[1] << 16) | (p[2] << 8) | (p[3])); - *port = (p[4] << 8) | (p[5]); - - return 1; -} - -/****************************************************************************/ -static int expect_rtp_rtcp(struct sk_buff **pskb, struct ip_conntrack *ct, - enum ip_conntrack_info ctinfo, - unsigned char **data, int dataoff, - H245_TransportAddress * addr) -{ - int dir = CTINFO2DIR(ctinfo); - int ret = 0; - __be32 ip; - u_int16_t port; - u_int16_t rtp_port; - struct ip_conntrack_expect *rtp_exp; - struct ip_conntrack_expect *rtcp_exp; - typeof(nat_rtp_rtcp_hook) nat_rtp_rtcp; - - /* Read RTP or RTCP address */ - if (!get_h245_addr(*data, addr, &ip, &port) || - ip != ct->tuplehash[dir].tuple.src.ip || port == 0) - return 0; - - /* RTP port is even */ - rtp_port = port & (~1); - - /* Create expect for RTP */ - if ((rtp_exp = ip_conntrack_expect_alloc(ct)) == NULL) - return -1; - rtp_exp->tuple.src.ip = ct->tuplehash[!dir].tuple.src.ip; - rtp_exp->tuple.src.u.udp.port = 0; - rtp_exp->tuple.dst.ip = ct->tuplehash[!dir].tuple.dst.ip; - rtp_exp->tuple.dst.u.udp.port = htons(rtp_port); - rtp_exp->tuple.dst.protonum = IPPROTO_UDP; - rtp_exp->mask.src.ip = htonl(0xFFFFFFFF); - rtp_exp->mask.src.u.udp.port = 0; - rtp_exp->mask.dst.ip = htonl(0xFFFFFFFF); - rtp_exp->mask.dst.u.udp.port = htons(0xFFFF); - rtp_exp->mask.dst.protonum = 0xFF; - rtp_exp->flags = 0; - - /* Create expect for RTCP */ - if ((rtcp_exp = ip_conntrack_expect_alloc(ct)) == NULL) { - 
ip_conntrack_expect_put(rtp_exp); - return -1; - } - rtcp_exp->tuple.src.ip = ct->tuplehash[!dir].tuple.src.ip; - rtcp_exp->tuple.src.u.udp.port = 0; - rtcp_exp->tuple.dst.ip = ct->tuplehash[!dir].tuple.dst.ip; - rtcp_exp->tuple.dst.u.udp.port = htons(rtp_port + 1); - rtcp_exp->tuple.dst.protonum = IPPROTO_UDP; - rtcp_exp->mask.src.ip = htonl(0xFFFFFFFF); - rtcp_exp->mask.src.u.udp.port = 0; - rtcp_exp->mask.dst.ip = htonl(0xFFFFFFFF); - rtcp_exp->mask.dst.u.udp.port = htons(0xFFFF); - rtcp_exp->mask.dst.protonum = 0xFF; - rtcp_exp->flags = 0; - - if (ct->tuplehash[dir].tuple.src.ip != - ct->tuplehash[!dir].tuple.dst.ip && - (nat_rtp_rtcp = rcu_dereference(nat_rtp_rtcp_hook))) { - /* NAT needed */ - ret = nat_rtp_rtcp(pskb, ct, ctinfo, data, dataoff, - addr, port, rtp_port, rtp_exp, rtcp_exp); - } else { /* Conntrack only */ - rtp_exp->expectfn = NULL; - rtcp_exp->expectfn = NULL; - - if (ip_conntrack_expect_related(rtp_exp) == 0) { - if (ip_conntrack_expect_related(rtcp_exp) == 0) { - DEBUGP("ip_ct_h323: expect RTP " - "%u.%u.%u.%u:%hu->%u.%u.%u.%u:%hu\n", - NIPQUAD(rtp_exp->tuple.src.ip), - ntohs(rtp_exp->tuple.src.u.udp.port), - NIPQUAD(rtp_exp->tuple.dst.ip), - ntohs(rtp_exp->tuple.dst.u.udp.port)); - DEBUGP("ip_ct_h323: expect RTCP " - "%u.%u.%u.%u:%hu->%u.%u.%u.%u:%hu\n", - NIPQUAD(rtcp_exp->tuple.src.ip), - ntohs(rtcp_exp->tuple.src.u.udp.port), - NIPQUAD(rtcp_exp->tuple.dst.ip), - ntohs(rtcp_exp->tuple.dst.u.udp.port)); - } else { - ip_conntrack_unexpect_related(rtp_exp); - ret = -1; - } - } else - ret = -1; - } - - ip_conntrack_expect_put(rtp_exp); - ip_conntrack_expect_put(rtcp_exp); - - return ret; -} - -/****************************************************************************/ -static int expect_t120(struct sk_buff **pskb, - struct ip_conntrack *ct, - enum ip_conntrack_info ctinfo, - unsigned char **data, int dataoff, - H245_TransportAddress * addr) -{ - int dir = CTINFO2DIR(ctinfo); - int ret = 0; - __be32 ip; - u_int16_t port; - struct ip_conntrack_expect *exp = NULL; - typeof(nat_t120_hook) nat_t120; - - /* Read T.120 address */ - if (!get_h245_addr(*data, addr, &ip, &port) || - ip != ct->tuplehash[dir].tuple.src.ip || port == 0) - return 0; - - /* Create expect for T.120 connections */ - if ((exp = ip_conntrack_expect_alloc(ct)) == NULL) - return -1; - exp->tuple.src.ip = ct->tuplehash[!dir].tuple.src.ip; - exp->tuple.src.u.tcp.port = 0; - exp->tuple.dst.ip = ct->tuplehash[!dir].tuple.dst.ip; - exp->tuple.dst.u.tcp.port = htons(port); - exp->tuple.dst.protonum = IPPROTO_TCP; - exp->mask.src.ip = htonl(0xFFFFFFFF); - exp->mask.src.u.tcp.port = 0; - exp->mask.dst.ip = htonl(0xFFFFFFFF); - exp->mask.dst.u.tcp.port = htons(0xFFFF); - exp->mask.dst.protonum = 0xFF; - exp->flags = IP_CT_EXPECT_PERMANENT; /* Accept multiple channels */ - - if (ct->tuplehash[dir].tuple.src.ip != - ct->tuplehash[!dir].tuple.dst.ip && - (nat_t120 = rcu_dereference(nat_t120_hook))) { - /* NAT needed */ - ret = nat_t120(pskb, ct, ctinfo, data, dataoff, addr, - port, exp); - } else { /* Conntrack only */ - exp->expectfn = NULL; - if (ip_conntrack_expect_related(exp) == 0) { - DEBUGP("ip_ct_h323: expect T.120 " - "%u.%u.%u.%u:%hu->%u.%u.%u.%u:%hu\n", - NIPQUAD(exp->tuple.src.ip), - ntohs(exp->tuple.src.u.tcp.port), - NIPQUAD(exp->tuple.dst.ip), - ntohs(exp->tuple.dst.u.tcp.port)); - } else - ret = -1; - } - - ip_conntrack_expect_put(exp); - - return ret; -} - -/****************************************************************************/ -static int process_h245_channel(struct sk_buff **pskb, - 
struct ip_conntrack *ct, - enum ip_conntrack_info ctinfo, - unsigned char **data, int dataoff, - H2250LogicalChannelParameters * channel) -{ - int ret; - - if (channel->options & eH2250LogicalChannelParameters_mediaChannel) { - /* RTP */ - ret = expect_rtp_rtcp(pskb, ct, ctinfo, data, dataoff, - &channel->mediaChannel); - if (ret < 0) - return -1; - } - - if (channel-> - options & eH2250LogicalChannelParameters_mediaControlChannel) { - /* RTCP */ - ret = expect_rtp_rtcp(pskb, ct, ctinfo, data, dataoff, - &channel->mediaControlChannel); - if (ret < 0) - return -1; - } - - return 0; -} - -/****************************************************************************/ -static int process_olc(struct sk_buff **pskb, struct ip_conntrack *ct, - enum ip_conntrack_info ctinfo, - unsigned char **data, int dataoff, - OpenLogicalChannel * olc) -{ - int ret; - - DEBUGP("ip_ct_h323: OpenLogicalChannel\n"); - - if (olc->forwardLogicalChannelParameters.multiplexParameters.choice == - eOpenLogicalChannel_forwardLogicalChannelParameters_multiplexParameters_h2250LogicalChannelParameters) - { - ret = process_h245_channel(pskb, ct, ctinfo, data, dataoff, - &olc-> - forwardLogicalChannelParameters. - multiplexParameters. - h2250LogicalChannelParameters); - if (ret < 0) - return -1; - } - - if ((olc->options & - eOpenLogicalChannel_reverseLogicalChannelParameters) && - (olc->reverseLogicalChannelParameters.options & - eOpenLogicalChannel_reverseLogicalChannelParameters_multiplexParameters) - && (olc->reverseLogicalChannelParameters.multiplexParameters. - choice == - eOpenLogicalChannel_reverseLogicalChannelParameters_multiplexParameters_h2250LogicalChannelParameters)) - { - ret = - process_h245_channel(pskb, ct, ctinfo, data, dataoff, - &olc-> - reverseLogicalChannelParameters. - multiplexParameters. - h2250LogicalChannelParameters); - if (ret < 0) - return -1; - } - - if ((olc->options & eOpenLogicalChannel_separateStack) && - olc->forwardLogicalChannelParameters.dataType.choice == - eDataType_data && - olc->forwardLogicalChannelParameters.dataType.data.application. - choice == eDataApplicationCapability_application_t120 && - olc->forwardLogicalChannelParameters.dataType.data.application. - t120.choice == eDataProtocolCapability_separateLANStack && - olc->separateStack.networkAddress.choice == - eNetworkAccessParameters_networkAddress_localAreaAddress) { - ret = expect_t120(pskb, ct, ctinfo, data, dataoff, - &olc->separateStack.networkAddress. - localAreaAddress); - if (ret < 0) - return -1; - } - - return 0; -} - -/****************************************************************************/ -static int process_olca(struct sk_buff **pskb, struct ip_conntrack *ct, - enum ip_conntrack_info ctinfo, - unsigned char **data, int dataoff, - OpenLogicalChannelAck * olca) -{ - H2250LogicalChannelAckParameters *ack; - int ret; - - DEBUGP("ip_ct_h323: OpenLogicalChannelAck\n"); - - if ((olca->options & - eOpenLogicalChannelAck_reverseLogicalChannelParameters) && - (olca->reverseLogicalChannelParameters.options & - eOpenLogicalChannelAck_reverseLogicalChannelParameters_multiplexParameters) - && (olca->reverseLogicalChannelParameters.multiplexParameters. - choice == - eOpenLogicalChannelAck_reverseLogicalChannelParameters_multiplexParameters_h2250LogicalChannelParameters)) - { - ret = process_h245_channel(pskb, ct, ctinfo, data, dataoff, - &olca-> - reverseLogicalChannelParameters. - multiplexParameters. 
- h2250LogicalChannelParameters); - if (ret < 0) - return -1; - } - - if ((olca->options & - eOpenLogicalChannelAck_forwardMultiplexAckParameters) && - (olca->forwardMultiplexAckParameters.choice == - eOpenLogicalChannelAck_forwardMultiplexAckParameters_h2250LogicalChannelAckParameters)) - { - ack = &olca->forwardMultiplexAckParameters. - h2250LogicalChannelAckParameters; - if (ack->options & - eH2250LogicalChannelAckParameters_mediaChannel) { - /* RTP */ - ret = expect_rtp_rtcp(pskb, ct, ctinfo, data, dataoff, - &ack->mediaChannel); - if (ret < 0) - return -1; - } - - if (ack->options & - eH2250LogicalChannelAckParameters_mediaControlChannel) { - /* RTCP */ - ret = expect_rtp_rtcp(pskb, ct, ctinfo, data, dataoff, - &ack->mediaControlChannel); - if (ret < 0) - return -1; - } - } - - return 0; -} - -/****************************************************************************/ -static int process_h245(struct sk_buff **pskb, struct ip_conntrack *ct, - enum ip_conntrack_info ctinfo, - unsigned char **data, int dataoff, - MultimediaSystemControlMessage * mscm) -{ - switch (mscm->choice) { - case eMultimediaSystemControlMessage_request: - if (mscm->request.choice == - eRequestMessage_openLogicalChannel) { - return process_olc(pskb, ct, ctinfo, data, dataoff, - &mscm->request.openLogicalChannel); - } - DEBUGP("ip_ct_h323: H.245 Request %d\n", - mscm->request.choice); - break; - case eMultimediaSystemControlMessage_response: - if (mscm->response.choice == - eResponseMessage_openLogicalChannelAck) { - return process_olca(pskb, ct, ctinfo, data, dataoff, - &mscm->response. - openLogicalChannelAck); - } - DEBUGP("ip_ct_h323: H.245 Response %d\n", - mscm->response.choice); - break; - default: - DEBUGP("ip_ct_h323: H.245 signal %d\n", mscm->choice); - break; - } - - return 0; -} - -/****************************************************************************/ -static int h245_help(struct sk_buff **pskb, struct ip_conntrack *ct, - enum ip_conntrack_info ctinfo) -{ - static MultimediaSystemControlMessage mscm; - unsigned char *data = NULL; - int datalen; - int dataoff; - int ret; - - /* Until there's been traffic both ways, don't look in packets. */ - if (ctinfo != IP_CT_ESTABLISHED - && ctinfo != IP_CT_ESTABLISHED + IP_CT_IS_REPLY) { - return NF_ACCEPT; - } - DEBUGP("ip_ct_h245: skblen = %u\n", (*pskb)->len); - - spin_lock_bh(&ip_h323_lock); - - /* Process each TPKT */ - while (get_tpkt_data(pskb, ct, ctinfo, &data, &datalen, &dataoff)) { - DEBUGP("ip_ct_h245: TPKT %u.%u.%u.%u->%u.%u.%u.%u, len=%d\n", - NIPQUAD((*pskb)->nh.iph->saddr), - NIPQUAD((*pskb)->nh.iph->daddr), datalen); - - /* Decode H.245 signal */ - ret = DecodeMultimediaSystemControlMessage(data, datalen, - &mscm); - if (ret < 0) { - if (net_ratelimit()) - printk("ip_ct_h245: decoding error: %s\n", - ret == H323_ERROR_BOUND ? 
- "out of bound" : "out of range"); - /* We don't drop when decoding error */ - break; - } - - /* Process H.245 signal */ - if (process_h245(pskb, ct, ctinfo, &data, dataoff, &mscm) < 0) - goto drop; - } - - spin_unlock_bh(&ip_h323_lock); - return NF_ACCEPT; - - drop: - spin_unlock_bh(&ip_h323_lock); - if (net_ratelimit()) - printk("ip_ct_h245: packet dropped\n"); - return NF_DROP; -} - -/****************************************************************************/ -static struct ip_conntrack_helper ip_conntrack_helper_h245 = { - .name = "H.245", - .me = THIS_MODULE, - .max_expected = H323_RTP_CHANNEL_MAX * 4 + 2 /* T.120 */ , - .timeout = 240, - .tuple = {.dst = {.protonum = IPPROTO_TCP}}, - .mask = {.src = {.u = {0xFFFF}}, - .dst = {.protonum = 0xFF}}, - .help = h245_help -}; - -/****************************************************************************/ -void ip_conntrack_h245_expect(struct ip_conntrack *new, - struct ip_conntrack_expect *this) -{ - write_lock_bh(&ip_conntrack_lock); - new->helper = &ip_conntrack_helper_h245; - write_unlock_bh(&ip_conntrack_lock); -} - -/****************************************************************************/ -int get_h225_addr(unsigned char *data, TransportAddress * addr, - __be32 * ip, u_int16_t * port) -{ - unsigned char *p; - - if (addr->choice != eTransportAddress_ipAddress) - return 0; - - p = data + addr->ipAddress.ip; - *ip = htonl((p[0] << 24) | (p[1] << 16) | (p[2] << 8) | (p[3])); - *port = (p[4] << 8) | (p[5]); - - return 1; -} - -/****************************************************************************/ -static int expect_h245(struct sk_buff **pskb, struct ip_conntrack *ct, - enum ip_conntrack_info ctinfo, - unsigned char **data, int dataoff, - TransportAddress * addr) -{ - int dir = CTINFO2DIR(ctinfo); - int ret = 0; - __be32 ip; - u_int16_t port; - struct ip_conntrack_expect *exp = NULL; - typeof(nat_h245_hook) nat_h245; - - /* Read h245Address */ - if (!get_h225_addr(*data, addr, &ip, &port) || - ip != ct->tuplehash[dir].tuple.src.ip || port == 0) - return 0; - - /* Create expect for h245 connection */ - if ((exp = ip_conntrack_expect_alloc(ct)) == NULL) - return -1; - exp->tuple.src.ip = ct->tuplehash[!dir].tuple.src.ip; - exp->tuple.src.u.tcp.port = 0; - exp->tuple.dst.ip = ct->tuplehash[!dir].tuple.dst.ip; - exp->tuple.dst.u.tcp.port = htons(port); - exp->tuple.dst.protonum = IPPROTO_TCP; - exp->mask.src.ip = htonl(0xFFFFFFFF); - exp->mask.src.u.tcp.port = 0; - exp->mask.dst.ip = htonl(0xFFFFFFFF); - exp->mask.dst.u.tcp.port = htons(0xFFFF); - exp->mask.dst.protonum = 0xFF; - exp->flags = 0; - - if (ct->tuplehash[dir].tuple.src.ip != - ct->tuplehash[!dir].tuple.dst.ip && - (nat_h245 = rcu_dereference(nat_h245_hook))) { - /* NAT needed */ - ret = nat_h245(pskb, ct, ctinfo, data, dataoff, addr, - port, exp); - } else { /* Conntrack only */ - exp->expectfn = ip_conntrack_h245_expect; - - if (ip_conntrack_expect_related(exp) == 0) { - DEBUGP("ip_ct_q931: expect H.245 " - "%u.%u.%u.%u:%hu->%u.%u.%u.%u:%hu\n", - NIPQUAD(exp->tuple.src.ip), - ntohs(exp->tuple.src.u.tcp.port), - NIPQUAD(exp->tuple.dst.ip), - ntohs(exp->tuple.dst.u.tcp.port)); - } else - ret = -1; - } - - ip_conntrack_expect_put(exp); - - return ret; -} - -/* Forwarding declaration */ -void ip_conntrack_q931_expect(struct ip_conntrack *new, - struct ip_conntrack_expect *this); - -/****************************************************************************/ -static int expect_callforwarding(struct sk_buff **pskb, - struct ip_conntrack *ct, - enum ip_conntrack_info 
ctinfo, - unsigned char **data, int dataoff, - TransportAddress * addr) -{ - int dir = CTINFO2DIR(ctinfo); - int ret = 0; - __be32 ip; - u_int16_t port; - struct ip_conntrack_expect *exp = NULL; - typeof(nat_callforwarding_hook) nat_callforwarding; - - /* Read alternativeAddress */ - if (!get_h225_addr(*data, addr, &ip, &port) || port == 0) - return 0; - - /* If the calling party is on the same side of the forward-to party, - * we don't need to track the second call */ - if (callforward_filter) { - struct rtable *rt1, *rt2; - struct flowi fl1 = { - .fl4_dst = ip, - }; - struct flowi fl2 = { - .fl4_dst = ct->tuplehash[!dir].tuple.src.ip, - }; - - if (ip_route_output_key(&rt1, &fl1) == 0) { - if (ip_route_output_key(&rt2, &fl2) == 0) { - if (rt1->rt_gateway == rt2->rt_gateway && - rt1->u.dst.dev == rt2->u.dst.dev) - ret = 1; - dst_release(&rt2->u.dst); - } - dst_release(&rt1->u.dst); - } - if (ret) { - DEBUGP("ip_ct_q931: Call Forwarding not tracked\n"); - return 0; - } - } - - /* Create expect for the second call leg */ - if ((exp = ip_conntrack_expect_alloc(ct)) == NULL) - return -1; - exp->tuple.src.ip = ct->tuplehash[!dir].tuple.src.ip; - exp->tuple.src.u.tcp.port = 0; - exp->tuple.dst.ip = ip; - exp->tuple.dst.u.tcp.port = htons(port); - exp->tuple.dst.protonum = IPPROTO_TCP; - exp->mask.src.ip = htonl(0xFFFFFFFF); - exp->mask.src.u.tcp.port = 0; - exp->mask.dst.ip = htonl(0xFFFFFFFF); - exp->mask.dst.u.tcp.port = htons(0xFFFF); - exp->mask.dst.protonum = 0xFF; - exp->flags = 0; - - if (ct->tuplehash[dir].tuple.src.ip != - ct->tuplehash[!dir].tuple.dst.ip && - (nat_callforwarding = rcu_dereference(nat_callforwarding_hook))) { - /* Need NAT */ - ret = nat_callforwarding(pskb, ct, ctinfo, data, dataoff, - addr, port, exp); - } else { /* Conntrack only */ - exp->expectfn = ip_conntrack_q931_expect; - - if (ip_conntrack_expect_related(exp) == 0) { - DEBUGP("ip_ct_q931: expect Call Forwarding " - "%u.%u.%u.%u:%hu->%u.%u.%u.%u:%hu\n", - NIPQUAD(exp->tuple.src.ip), - ntohs(exp->tuple.src.u.tcp.port), - NIPQUAD(exp->tuple.dst.ip), - ntohs(exp->tuple.dst.u.tcp.port)); - } else - ret = -1; - } - - ip_conntrack_expect_put(exp); - - return ret; -} - -/****************************************************************************/ -static int process_setup(struct sk_buff **pskb, struct ip_conntrack *ct, - enum ip_conntrack_info ctinfo, - unsigned char **data, int dataoff, - Setup_UUIE * setup) -{ - int dir = CTINFO2DIR(ctinfo); - int ret; - int i; - __be32 ip; - u_int16_t port; - typeof(set_h225_addr_hook) set_h225_addr; - - DEBUGP("ip_ct_q931: Setup\n"); - - if (setup->options & eSetup_UUIE_h245Address) { - ret = expect_h245(pskb, ct, ctinfo, data, dataoff, - &setup->h245Address); - if (ret < 0) - return -1; - } - - set_h225_addr = rcu_dereference(set_h225_addr_hook); - - if ((setup->options & eSetup_UUIE_destCallSignalAddress) && - (set_h225_addr) && - get_h225_addr(*data, &setup->destCallSignalAddress, &ip, &port) && - ip != ct->tuplehash[!dir].tuple.src.ip) { - DEBUGP("ip_ct_q931: set destCallSignalAddress " - "%u.%u.%u.%u:%hu->%u.%u.%u.%u:%hu\n", - NIPQUAD(ip), port, - NIPQUAD(ct->tuplehash[!dir].tuple.src.ip), - ntohs(ct->tuplehash[!dir].tuple.src.u.tcp.port)); - ret = set_h225_addr(pskb, data, dataoff, - &setup->destCallSignalAddress, - ct->tuplehash[!dir].tuple.src.ip, - ntohs(ct->tuplehash[!dir].tuple.src. 
- u.tcp.port)); - if (ret < 0) - return -1; - } - - if ((setup->options & eSetup_UUIE_sourceCallSignalAddress) && - (set_h225_addr) && - get_h225_addr(*data, &setup->sourceCallSignalAddress, &ip, &port) - && ip != ct->tuplehash[!dir].tuple.dst.ip) { - DEBUGP("ip_ct_q931: set sourceCallSignalAddress " - "%u.%u.%u.%u:%hu->%u.%u.%u.%u:%hu\n", - NIPQUAD(ip), port, - NIPQUAD(ct->tuplehash[!dir].tuple.dst.ip), - ntohs(ct->tuplehash[!dir].tuple.dst.u.tcp.port)); - ret = set_h225_addr(pskb, data, dataoff, - &setup->sourceCallSignalAddress, - ct->tuplehash[!dir].tuple.dst.ip, - ntohs(ct->tuplehash[!dir].tuple.dst. - u.tcp.port)); - if (ret < 0) - return -1; - } - - if (setup->options & eSetup_UUIE_fastStart) { - for (i = 0; i < setup->fastStart.count; i++) { - ret = process_olc(pskb, ct, ctinfo, data, dataoff, - &setup->fastStart.item[i]); - if (ret < 0) - return -1; - } - } - - return 0; -} - -/****************************************************************************/ -static int process_callproceeding(struct sk_buff **pskb, - struct ip_conntrack *ct, - enum ip_conntrack_info ctinfo, - unsigned char **data, int dataoff, - CallProceeding_UUIE * callproc) -{ - int ret; - int i; - - DEBUGP("ip_ct_q931: CallProceeding\n"); - - if (callproc->options & eCallProceeding_UUIE_h245Address) { - ret = expect_h245(pskb, ct, ctinfo, data, dataoff, - &callproc->h245Address); - if (ret < 0) - return -1; - } - - if (callproc->options & eCallProceeding_UUIE_fastStart) { - for (i = 0; i < callproc->fastStart.count; i++) { - ret = process_olc(pskb, ct, ctinfo, data, dataoff, - &callproc->fastStart.item[i]); - if (ret < 0) - return -1; - } - } - - return 0; -} - -/****************************************************************************/ -static int process_connect(struct sk_buff **pskb, struct ip_conntrack *ct, - enum ip_conntrack_info ctinfo, - unsigned char **data, int dataoff, - Connect_UUIE * connect) -{ - int ret; - int i; - - DEBUGP("ip_ct_q931: Connect\n"); - - if (connect->options & eConnect_UUIE_h245Address) { - ret = expect_h245(pskb, ct, ctinfo, data, dataoff, - &connect->h245Address); - if (ret < 0) - return -1; - } - - if (connect->options & eConnect_UUIE_fastStart) { - for (i = 0; i < connect->fastStart.count; i++) { - ret = process_olc(pskb, ct, ctinfo, data, dataoff, - &connect->fastStart.item[i]); - if (ret < 0) - return -1; - } - } - - return 0; -} - -/****************************************************************************/ -static int process_alerting(struct sk_buff **pskb, struct ip_conntrack *ct, - enum ip_conntrack_info ctinfo, - unsigned char **data, int dataoff, - Alerting_UUIE * alert) -{ - int ret; - int i; - - DEBUGP("ip_ct_q931: Alerting\n"); - - if (alert->options & eAlerting_UUIE_h245Address) { - ret = expect_h245(pskb, ct, ctinfo, data, dataoff, - &alert->h245Address); - if (ret < 0) - return -1; - } - - if (alert->options & eAlerting_UUIE_fastStart) { - for (i = 0; i < alert->fastStart.count; i++) { - ret = process_olc(pskb, ct, ctinfo, data, dataoff, - &alert->fastStart.item[i]); - if (ret < 0) - return -1; - } - } - - return 0; -} - -/****************************************************************************/ -static int process_information(struct sk_buff **pskb, - struct ip_conntrack *ct, - enum ip_conntrack_info ctinfo, - unsigned char **data, int dataoff, - Information_UUIE * info) -{ - int ret; - int i; - - DEBUGP("ip_ct_q931: Information\n"); - - if (info->options & eInformation_UUIE_fastStart) { - for (i = 0; i < info->fastStart.count; i++) { - ret = 
process_olc(pskb, ct, ctinfo, data, dataoff, - &info->fastStart.item[i]); - if (ret < 0) - return -1; - } - } - - return 0; -} - -/****************************************************************************/ -static int process_facility(struct sk_buff **pskb, struct ip_conntrack *ct, - enum ip_conntrack_info ctinfo, - unsigned char **data, int dataoff, - Facility_UUIE * facility) -{ - int ret; - int i; - - DEBUGP("ip_ct_q931: Facility\n"); - - if (facility->reason.choice == eFacilityReason_callForwarded) { - if (facility->options & eFacility_UUIE_alternativeAddress) - return expect_callforwarding(pskb, ct, ctinfo, data, - dataoff, - &facility-> - alternativeAddress); - return 0; - } - - if (facility->options & eFacility_UUIE_h245Address) { - ret = expect_h245(pskb, ct, ctinfo, data, dataoff, - &facility->h245Address); - if (ret < 0) - return -1; - } - - if (facility->options & eFacility_UUIE_fastStart) { - for (i = 0; i < facility->fastStart.count; i++) { - ret = process_olc(pskb, ct, ctinfo, data, dataoff, - &facility->fastStart.item[i]); - if (ret < 0) - return -1; - } - } - - return 0; -} - -/****************************************************************************/ -static int process_progress(struct sk_buff **pskb, struct ip_conntrack *ct, - enum ip_conntrack_info ctinfo, - unsigned char **data, int dataoff, - Progress_UUIE * progress) -{ - int ret; - int i; - - DEBUGP("ip_ct_q931: Progress\n"); - - if (progress->options & eProgress_UUIE_h245Address) { - ret = expect_h245(pskb, ct, ctinfo, data, dataoff, - &progress->h245Address); - if (ret < 0) - return -1; - } - - if (progress->options & eProgress_UUIE_fastStart) { - for (i = 0; i < progress->fastStart.count; i++) { - ret = process_olc(pskb, ct, ctinfo, data, dataoff, - &progress->fastStart.item[i]); - if (ret < 0) - return -1; - } - } - - return 0; -} - -/****************************************************************************/ -static int process_q931(struct sk_buff **pskb, struct ip_conntrack *ct, - enum ip_conntrack_info ctinfo, - unsigned char **data, int dataoff, Q931 * q931) -{ - H323_UU_PDU *pdu = &q931->UUIE.h323_uu_pdu; - int i; - int ret = 0; - - switch (pdu->h323_message_body.choice) { - case eH323_UU_PDU_h323_message_body_setup: - ret = process_setup(pskb, ct, ctinfo, data, dataoff, - &pdu->h323_message_body.setup); - break; - case eH323_UU_PDU_h323_message_body_callProceeding: - ret = process_callproceeding(pskb, ct, ctinfo, data, dataoff, - &pdu->h323_message_body. - callProceeding); - break; - case eH323_UU_PDU_h323_message_body_connect: - ret = process_connect(pskb, ct, ctinfo, data, dataoff, - &pdu->h323_message_body.connect); - break; - case eH323_UU_PDU_h323_message_body_alerting: - ret = process_alerting(pskb, ct, ctinfo, data, dataoff, - &pdu->h323_message_body.alerting); - break; - case eH323_UU_PDU_h323_message_body_information: - ret = process_information(pskb, ct, ctinfo, data, dataoff, - &pdu->h323_message_body. 
- information); - break; - case eH323_UU_PDU_h323_message_body_facility: - ret = process_facility(pskb, ct, ctinfo, data, dataoff, - &pdu->h323_message_body.facility); - break; - case eH323_UU_PDU_h323_message_body_progress: - ret = process_progress(pskb, ct, ctinfo, data, dataoff, - &pdu->h323_message_body.progress); - break; - default: - DEBUGP("ip_ct_q931: Q.931 signal %d\n", - pdu->h323_message_body.choice); - break; - } - - if (ret < 0) - return -1; - - if (pdu->options & eH323_UU_PDU_h245Control) { - for (i = 0; i < pdu->h245Control.count; i++) { - ret = process_h245(pskb, ct, ctinfo, data, dataoff, - &pdu->h245Control.item[i]); - if (ret < 0) - return -1; - } - } - - return 0; -} - -/****************************************************************************/ -static int q931_help(struct sk_buff **pskb, struct ip_conntrack *ct, - enum ip_conntrack_info ctinfo) -{ - static Q931 q931; - unsigned char *data = NULL; - int datalen; - int dataoff; - int ret; - - /* Until there's been traffic both ways, don't look in packets. */ - if (ctinfo != IP_CT_ESTABLISHED - && ctinfo != IP_CT_ESTABLISHED + IP_CT_IS_REPLY) { - return NF_ACCEPT; - } - DEBUGP("ip_ct_q931: skblen = %u\n", (*pskb)->len); - - spin_lock_bh(&ip_h323_lock); - - /* Process each TPKT */ - while (get_tpkt_data(pskb, ct, ctinfo, &data, &datalen, &dataoff)) { - DEBUGP("ip_ct_q931: TPKT %u.%u.%u.%u->%u.%u.%u.%u, len=%d\n", - NIPQUAD((*pskb)->nh.iph->saddr), - NIPQUAD((*pskb)->nh.iph->daddr), datalen); - - /* Decode Q.931 signal */ - ret = DecodeQ931(data, datalen, &q931); - if (ret < 0) { - if (net_ratelimit()) - printk("ip_ct_q931: decoding error: %s\n", - ret == H323_ERROR_BOUND ? - "out of bound" : "out of range"); - /* We don't drop when decoding error */ - break; - } - - /* Process Q.931 signal */ - if (process_q931(pskb, ct, ctinfo, &data, dataoff, &q931) < 0) - goto drop; - } - - spin_unlock_bh(&ip_h323_lock); - return NF_ACCEPT; - - drop: - spin_unlock_bh(&ip_h323_lock); - if (net_ratelimit()) - printk("ip_ct_q931: packet dropped\n"); - return NF_DROP; -} - -/****************************************************************************/ -static struct ip_conntrack_helper ip_conntrack_helper_q931 = { - .name = "Q.931", - .me = THIS_MODULE, - .max_expected = H323_RTP_CHANNEL_MAX * 4 + 4 /* T.120 and H.245 */ , - .timeout = 240, - .tuple = {.src = {.u = {.tcp = {.port = __constant_htons(Q931_PORT)}}}, - .dst = {.protonum = IPPROTO_TCP}}, - .mask = {.src = {.u = {0xFFFF}}, - .dst = {.protonum = 0xFF}}, - .help = q931_help -}; - -/****************************************************************************/ -void ip_conntrack_q931_expect(struct ip_conntrack *new, - struct ip_conntrack_expect *this) -{ - write_lock_bh(&ip_conntrack_lock); - new->helper = &ip_conntrack_helper_q931; - write_unlock_bh(&ip_conntrack_lock); -} - -/****************************************************************************/ -static unsigned char *get_udp_data(struct sk_buff **pskb, int *datalen) -{ - struct udphdr _uh, *uh; - int dataoff; - - uh = skb_header_pointer(*pskb, (*pskb)->nh.iph->ihl * 4, sizeof(_uh), - &_uh); - if (uh == NULL) - return NULL; - dataoff = (*pskb)->nh.iph->ihl * 4 + sizeof(_uh); - if (dataoff >= (*pskb)->len) - return NULL; - *datalen = (*pskb)->len - dataoff; - return skb_header_pointer(*pskb, dataoff, *datalen, h323_buffer); -} - -/****************************************************************************/ -static struct ip_conntrack_expect *find_expect(struct ip_conntrack *ct, - __be32 ip, u_int16_t port) -{ - struct 
ip_conntrack_expect *exp; - struct ip_conntrack_tuple tuple; - - tuple.src.ip = 0; - tuple.src.u.tcp.port = 0; - tuple.dst.ip = ip; - tuple.dst.u.tcp.port = htons(port); - tuple.dst.protonum = IPPROTO_TCP; - - exp = __ip_conntrack_expect_find(&tuple); - if (exp && exp->master == ct) - return exp; - return NULL; -} - -/****************************************************************************/ -static int set_expect_timeout(struct ip_conntrack_expect *exp, - unsigned timeout) -{ - if (!exp || !del_timer(&exp->timeout)) - return 0; - - exp->timeout.expires = jiffies + timeout * HZ; - add_timer(&exp->timeout); - - return 1; -} - -/****************************************************************************/ -static int expect_q931(struct sk_buff **pskb, struct ip_conntrack *ct, - enum ip_conntrack_info ctinfo, - unsigned char **data, - TransportAddress * addr, int count) -{ - struct ip_ct_h323_master *info = &ct->help.ct_h323_info; - int dir = CTINFO2DIR(ctinfo); - int ret = 0; - int i; - __be32 ip; - u_int16_t port; - struct ip_conntrack_expect *exp; - typeof(nat_q931_hook) nat_q931; - - /* Look for the first related address */ - for (i = 0; i < count; i++) { - if (get_h225_addr(*data, &addr[i], &ip, &port) && - ip == ct->tuplehash[dir].tuple.src.ip && port != 0) - break; - } - - if (i >= count) /* Not found */ - return 0; - - /* Create expect for Q.931 */ - if ((exp = ip_conntrack_expect_alloc(ct)) == NULL) - return -1; - exp->tuple.src.ip = gkrouted_only ? /* only accept calls from GK? */ - ct->tuplehash[!dir].tuple.src.ip : 0; - exp->tuple.src.u.tcp.port = 0; - exp->tuple.dst.ip = ct->tuplehash[!dir].tuple.dst.ip; - exp->tuple.dst.u.tcp.port = htons(port); - exp->tuple.dst.protonum = IPPROTO_TCP; - exp->mask.src.ip = gkrouted_only ? htonl(0xFFFFFFFF) : 0; - exp->mask.src.u.tcp.port = 0; - exp->mask.dst.ip = htonl(0xFFFFFFFF); - exp->mask.dst.u.tcp.port = htons(0xFFFF); - exp->mask.dst.protonum = 0xFF; - exp->flags = IP_CT_EXPECT_PERMANENT; /* Accept multiple calls */ - - nat_q931 = rcu_dereference(nat_q931_hook); - if (nat_q931) { /* Need NAT */ - ret = nat_q931(pskb, ct, ctinfo, data, addr, i, port, exp); - } else { /* Conntrack only */ - exp->expectfn = ip_conntrack_q931_expect; - - if (ip_conntrack_expect_related(exp) == 0) { - DEBUGP("ip_ct_ras: expect Q.931 " - "%u.%u.%u.%u:%hu->%u.%u.%u.%u:%hu\n", - NIPQUAD(exp->tuple.src.ip), - ntohs(exp->tuple.src.u.tcp.port), - NIPQUAD(exp->tuple.dst.ip), - ntohs(exp->tuple.dst.u.tcp.port)); - - /* Save port for looking up expect in processing RCF */ - info->sig_port[dir] = port; - } else - ret = -1; - } - - ip_conntrack_expect_put(exp); - - return ret; -} - -/****************************************************************************/ -static int process_grq(struct sk_buff **pskb, struct ip_conntrack *ct, - enum ip_conntrack_info ctinfo, - unsigned char **data, GatekeeperRequest * grq) -{ - typeof(set_ras_addr_hook) set_ras_addr; - - DEBUGP("ip_ct_ras: GRQ\n"); - - set_ras_addr = rcu_dereference(set_ras_addr_hook); - if (set_ras_addr) /* NATed */ - return set_ras_addr(pskb, ct, ctinfo, data, - &grq->rasAddress, 1); - return 0; -} - -/* Declare before using */ -static void ip_conntrack_ras_expect(struct ip_conntrack *new, - struct ip_conntrack_expect *this); - -/****************************************************************************/ -static int process_gcf(struct sk_buff **pskb, struct ip_conntrack *ct, - enum ip_conntrack_info ctinfo, - unsigned char **data, GatekeeperConfirm * gcf) -{ - int dir = CTINFO2DIR(ctinfo); - int ret = 0; - 
__be32 ip; - u_int16_t port; - struct ip_conntrack_expect *exp; - - DEBUGP("ip_ct_ras: GCF\n"); - - if (!get_h225_addr(*data, &gcf->rasAddress, &ip, &port)) - return 0; - - /* Registration port is the same as discovery port */ - if (ip == ct->tuplehash[dir].tuple.src.ip && - port == ntohs(ct->tuplehash[dir].tuple.src.u.udp.port)) - return 0; - - /* Avoid RAS expectation loops. A GCF is never expected. */ - if (test_bit(IPS_EXPECTED_BIT, &ct->status)) - return 0; - - /* Need new expect */ - if ((exp = ip_conntrack_expect_alloc(ct)) == NULL) - return -1; - exp->tuple.src.ip = ct->tuplehash[!dir].tuple.src.ip; - exp->tuple.src.u.tcp.port = 0; - exp->tuple.dst.ip = ip; - exp->tuple.dst.u.tcp.port = htons(port); - exp->tuple.dst.protonum = IPPROTO_UDP; - exp->mask.src.ip = htonl(0xFFFFFFFF); - exp->mask.src.u.tcp.port = 0; - exp->mask.dst.ip = htonl(0xFFFFFFFF); - exp->mask.dst.u.tcp.port = htons(0xFFFF); - exp->mask.dst.protonum = 0xFF; - exp->flags = 0; - exp->expectfn = ip_conntrack_ras_expect; - if (ip_conntrack_expect_related(exp) == 0) { - DEBUGP("ip_ct_ras: expect RAS " - "%u.%u.%u.%u:%hu->%u.%u.%u.%u:%hu\n", - NIPQUAD(exp->tuple.src.ip), - ntohs(exp->tuple.src.u.tcp.port), - NIPQUAD(exp->tuple.dst.ip), - ntohs(exp->tuple.dst.u.tcp.port)); - } else - ret = -1; - - ip_conntrack_expect_put(exp); - - return ret; -} - -/****************************************************************************/ -static int process_rrq(struct sk_buff **pskb, struct ip_conntrack *ct, - enum ip_conntrack_info ctinfo, - unsigned char **data, RegistrationRequest * rrq) -{ - struct ip_ct_h323_master *info = &ct->help.ct_h323_info; - int ret; - typeof(set_ras_addr_hook) set_ras_addr; - - DEBUGP("ip_ct_ras: RRQ\n"); - - ret = expect_q931(pskb, ct, ctinfo, data, - rrq->callSignalAddress.item, - rrq->callSignalAddress.count); - if (ret < 0) - return -1; - - set_ras_addr = rcu_dereference(set_ras_addr_hook); - if (set_ras_addr) { - ret = set_ras_addr(pskb, ct, ctinfo, data, - rrq->rasAddress.item, - rrq->rasAddress.count); - if (ret < 0) - return -1; - } - - if (rrq->options & eRegistrationRequest_timeToLive) { - DEBUGP("ip_ct_ras: RRQ TTL = %u seconds\n", rrq->timeToLive); - info->timeout = rrq->timeToLive; - } else - info->timeout = default_rrq_ttl; - - return 0; -} - -/****************************************************************************/ -static int process_rcf(struct sk_buff **pskb, struct ip_conntrack *ct, - enum ip_conntrack_info ctinfo, - unsigned char **data, RegistrationConfirm * rcf) -{ - struct ip_ct_h323_master *info = &ct->help.ct_h323_info; - int dir = CTINFO2DIR(ctinfo); - int ret; - struct ip_conntrack_expect *exp; - typeof(set_sig_addr_hook) set_sig_addr; - - DEBUGP("ip_ct_ras: RCF\n"); - - set_sig_addr = rcu_dereference(set_sig_addr_hook); - if (set_sig_addr) { - ret = set_sig_addr(pskb, ct, ctinfo, data, - rcf->callSignalAddress.item, - rcf->callSignalAddress.count); - if (ret < 0) - return -1; - } - - if (rcf->options & eRegistrationConfirm_timeToLive) { - DEBUGP("ip_ct_ras: RCF TTL = %u seconds\n", rcf->timeToLive); - info->timeout = rcf->timeToLive; - } - - if (info->timeout > 0) { - DEBUGP - ("ip_ct_ras: set RAS connection timeout to %u seconds\n", - info->timeout); - ip_ct_refresh(ct, *pskb, info->timeout * HZ); - - /* Set expect timeout */ - read_lock_bh(&ip_conntrack_lock); - exp = find_expect(ct, ct->tuplehash[dir].tuple.dst.ip, - info->sig_port[!dir]); - if (exp) { - DEBUGP("ip_ct_ras: set Q.931 expect " - "(%u.%u.%u.%u:%hu->%u.%u.%u.%u:%hu) " - "timeout to %u seconds\n", - 
NIPQUAD(exp->tuple.src.ip), - ntohs(exp->tuple.src.u.tcp.port), - NIPQUAD(exp->tuple.dst.ip), - ntohs(exp->tuple.dst.u.tcp.port), - info->timeout); - set_expect_timeout(exp, info->timeout); - } - read_unlock_bh(&ip_conntrack_lock); - } - - return 0; -} - -/****************************************************************************/ -static int process_urq(struct sk_buff **pskb, struct ip_conntrack *ct, - enum ip_conntrack_info ctinfo, - unsigned char **data, UnregistrationRequest * urq) -{ - struct ip_ct_h323_master *info = &ct->help.ct_h323_info; - int dir = CTINFO2DIR(ctinfo); - int ret; - typeof(set_sig_addr_hook) set_sig_addr; - - DEBUGP("ip_ct_ras: URQ\n"); - - set_sig_addr = rcu_dereference(set_sig_addr_hook); - if (set_sig_addr) { - ret = set_sig_addr(pskb, ct, ctinfo, data, - urq->callSignalAddress.item, - urq->callSignalAddress.count); - if (ret < 0) - return -1; - } - - /* Clear old expect */ - ip_ct_remove_expectations(ct); - info->sig_port[dir] = 0; - info->sig_port[!dir] = 0; - - /* Give it 30 seconds for UCF or URJ */ - ip_ct_refresh(ct, *pskb, 30 * HZ); - - return 0; -} - -/****************************************************************************/ -static int process_arq(struct sk_buff **pskb, struct ip_conntrack *ct, - enum ip_conntrack_info ctinfo, - unsigned char **data, AdmissionRequest * arq) -{ - struct ip_ct_h323_master *info = &ct->help.ct_h323_info; - int dir = CTINFO2DIR(ctinfo); - __be32 ip; - u_int16_t port; - typeof(set_h225_addr_hook) set_h225_addr; - - DEBUGP("ip_ct_ras: ARQ\n"); - - set_h225_addr = rcu_dereference(set_h225_addr_hook); - if ((arq->options & eAdmissionRequest_destCallSignalAddress) && - get_h225_addr(*data, &arq->destCallSignalAddress, &ip, &port) && - ip == ct->tuplehash[dir].tuple.src.ip && - port == info->sig_port[dir] && set_h225_addr) { - /* Answering ARQ */ - return set_h225_addr(pskb, data, 0, - &arq->destCallSignalAddress, - ct->tuplehash[!dir].tuple.dst.ip, - info->sig_port[!dir]); - } - - if ((arq->options & eAdmissionRequest_srcCallSignalAddress) && - get_h225_addr(*data, &arq->srcCallSignalAddress, &ip, &port) && - ip == ct->tuplehash[dir].tuple.src.ip && set_h225_addr) { - /* Calling ARQ */ - return set_h225_addr(pskb, data, 0, - &arq->srcCallSignalAddress, - ct->tuplehash[!dir].tuple.dst.ip, - port); - } - - return 0; -} - -/****************************************************************************/ -static int process_acf(struct sk_buff **pskb, struct ip_conntrack *ct, - enum ip_conntrack_info ctinfo, - unsigned char **data, AdmissionConfirm * acf) -{ - int dir = CTINFO2DIR(ctinfo); - int ret = 0; - __be32 ip; - u_int16_t port; - struct ip_conntrack_expect *exp; - typeof(set_sig_addr_hook) set_sig_addr; - - DEBUGP("ip_ct_ras: ACF\n"); - - if (!get_h225_addr(*data, &acf->destCallSignalAddress, &ip, &port)) - return 0; - - if (ip == ct->tuplehash[dir].tuple.dst.ip) { /* Answering ACF */ - set_sig_addr = rcu_dereference(set_sig_addr_hook); - if (set_sig_addr) - return set_sig_addr(pskb, ct, ctinfo, data, - &acf->destCallSignalAddress, 1); - return 0; - } - - /* Need new expect */ - if ((exp = ip_conntrack_expect_alloc(ct)) == NULL) - return -1; - exp->tuple.src.ip = ct->tuplehash[!dir].tuple.src.ip; - exp->tuple.src.u.tcp.port = 0; - exp->tuple.dst.ip = ip; - exp->tuple.dst.u.tcp.port = htons(port); - exp->tuple.dst.protonum = IPPROTO_TCP; - exp->mask.src.ip = htonl(0xFFFFFFFF); - exp->mask.src.u.tcp.port = 0; - exp->mask.dst.ip = htonl(0xFFFFFFFF); - exp->mask.dst.u.tcp.port = htons(0xFFFF); - exp->mask.dst.protonum = 0xFF; - 
exp->flags = IP_CT_EXPECT_PERMANENT; - exp->expectfn = ip_conntrack_q931_expect; - - if (ip_conntrack_expect_related(exp) == 0) { - DEBUGP("ip_ct_ras: expect Q.931 " - "%u.%u.%u.%u:%hu->%u.%u.%u.%u:%hu\n", - NIPQUAD(exp->tuple.src.ip), - ntohs(exp->tuple.src.u.tcp.port), - NIPQUAD(exp->tuple.dst.ip), - ntohs(exp->tuple.dst.u.tcp.port)); - } else - ret = -1; - - ip_conntrack_expect_put(exp); - - return ret; -} - -/****************************************************************************/ -static int process_lrq(struct sk_buff **pskb, struct ip_conntrack *ct, - enum ip_conntrack_info ctinfo, - unsigned char **data, LocationRequest * lrq) -{ - typeof(set_ras_addr_hook) set_ras_addr; - - DEBUGP("ip_ct_ras: LRQ\n"); - - set_ras_addr = rcu_dereference(set_ras_addr_hook); - if (set_ras_addr) - return set_ras_addr(pskb, ct, ctinfo, data, - &lrq->replyAddress, 1); - return 0; -} - -/****************************************************************************/ -static int process_lcf(struct sk_buff **pskb, struct ip_conntrack *ct, - enum ip_conntrack_info ctinfo, - unsigned char **data, LocationConfirm * lcf) -{ - int dir = CTINFO2DIR(ctinfo); - int ret = 0; - __be32 ip; - u_int16_t port; - struct ip_conntrack_expect *exp = NULL; - - DEBUGP("ip_ct_ras: LCF\n"); - - if (!get_h225_addr(*data, &lcf->callSignalAddress, &ip, &port)) - return 0; - - /* Need new expect for call signal */ - if ((exp = ip_conntrack_expect_alloc(ct)) == NULL) - return -1; - exp->tuple.src.ip = ct->tuplehash[!dir].tuple.src.ip; - exp->tuple.src.u.tcp.port = 0; - exp->tuple.dst.ip = ip; - exp->tuple.dst.u.tcp.port = htons(port); - exp->tuple.dst.protonum = IPPROTO_TCP; - exp->mask.src.ip = htonl(0xFFFFFFFF); - exp->mask.src.u.tcp.port = 0; - exp->mask.dst.ip = htonl(0xFFFFFFFF); - exp->mask.dst.u.tcp.port = htons(0xFFFF); - exp->mask.dst.protonum = 0xFF; - exp->flags = IP_CT_EXPECT_PERMANENT; - exp->expectfn = ip_conntrack_q931_expect; - - if (ip_conntrack_expect_related(exp) == 0) { - DEBUGP("ip_ct_ras: expect Q.931 " - "%u.%u.%u.%u:%hu->%u.%u.%u.%u:%hu\n", - NIPQUAD(exp->tuple.src.ip), - ntohs(exp->tuple.src.u.tcp.port), - NIPQUAD(exp->tuple.dst.ip), - ntohs(exp->tuple.dst.u.tcp.port)); - } else - ret = -1; - - ip_conntrack_expect_put(exp); - - /* Ignore rasAddress */ - - return ret; -} - -/****************************************************************************/ -static int process_irr(struct sk_buff **pskb, struct ip_conntrack *ct, - enum ip_conntrack_info ctinfo, - unsigned char **data, InfoRequestResponse * irr) -{ - int ret; - typeof(set_ras_addr_hook) set_ras_addr; - typeof(set_sig_addr_hook) set_sig_addr; - - DEBUGP("ip_ct_ras: IRR\n"); - - set_ras_addr = rcu_dereference(set_ras_addr_hook); - if (set_ras_addr) { - ret = set_ras_addr(pskb, ct, ctinfo, data, - &irr->rasAddress, 1); - if (ret < 0) - return -1; - } - - set_sig_addr = rcu_dereference(set_sig_addr_hook); - if (set_sig_addr) { - ret = set_sig_addr(pskb, ct, ctinfo, data, - irr->callSignalAddress.item, - irr->callSignalAddress.count); - if (ret < 0) - return -1; - } - - return 0; -} - -/****************************************************************************/ -static int process_ras(struct sk_buff **pskb, struct ip_conntrack *ct, - enum ip_conntrack_info ctinfo, - unsigned char **data, RasMessage * ras) -{ - switch (ras->choice) { - case eRasMessage_gatekeeperRequest: - return process_grq(pskb, ct, ctinfo, data, - &ras->gatekeeperRequest); - case eRasMessage_gatekeeperConfirm: - return process_gcf(pskb, ct, ctinfo, data, - &ras->gatekeeperConfirm); 
- case eRasMessage_registrationRequest: - return process_rrq(pskb, ct, ctinfo, data, - &ras->registrationRequest); - case eRasMessage_registrationConfirm: - return process_rcf(pskb, ct, ctinfo, data, - &ras->registrationConfirm); - case eRasMessage_unregistrationRequest: - return process_urq(pskb, ct, ctinfo, data, - &ras->unregistrationRequest); - case eRasMessage_admissionRequest: - return process_arq(pskb, ct, ctinfo, data, - &ras->admissionRequest); - case eRasMessage_admissionConfirm: - return process_acf(pskb, ct, ctinfo, data, - &ras->admissionConfirm); - case eRasMessage_locationRequest: - return process_lrq(pskb, ct, ctinfo, data, - &ras->locationRequest); - case eRasMessage_locationConfirm: - return process_lcf(pskb, ct, ctinfo, data, - &ras->locationConfirm); - case eRasMessage_infoRequestResponse: - return process_irr(pskb, ct, ctinfo, data, - &ras->infoRequestResponse); - default: - DEBUGP("ip_ct_ras: RAS message %d\n", ras->choice); - break; - } - - return 0; -} - -/****************************************************************************/ -static int ras_help(struct sk_buff **pskb, struct ip_conntrack *ct, - enum ip_conntrack_info ctinfo) -{ - static RasMessage ras; - unsigned char *data; - int datalen = 0; - int ret; - - DEBUGP("ip_ct_ras: skblen = %u\n", (*pskb)->len); - - spin_lock_bh(&ip_h323_lock); - - /* Get UDP data */ - data = get_udp_data(pskb, &datalen); - if (data == NULL) - goto accept; - DEBUGP("ip_ct_ras: RAS message %u.%u.%u.%u->%u.%u.%u.%u, len=%d\n", - NIPQUAD((*pskb)->nh.iph->saddr), - NIPQUAD((*pskb)->nh.iph->daddr), datalen); - - /* Decode RAS message */ - ret = DecodeRasMessage(data, datalen, &ras); - if (ret < 0) { - if (net_ratelimit()) - printk("ip_ct_ras: decoding error: %s\n", - ret == H323_ERROR_BOUND ? - "out of bound" : "out of range"); - goto accept; - } - - /* Process RAS message */ - if (process_ras(pskb, ct, ctinfo, &data, &ras) < 0) - goto drop; - - accept: - spin_unlock_bh(&ip_h323_lock); - return NF_ACCEPT; - - drop: - spin_unlock_bh(&ip_h323_lock); - if (net_ratelimit()) - printk("ip_ct_ras: packet dropped\n"); - return NF_DROP; -} - -/****************************************************************************/ -static struct ip_conntrack_helper ip_conntrack_helper_ras = { - .name = "RAS", - .me = THIS_MODULE, - .max_expected = 32, - .timeout = 240, - .tuple = {.src = {.u = {.tcp = {.port = __constant_htons(RAS_PORT)}}}, - .dst = {.protonum = IPPROTO_UDP}}, - .mask = {.src = {.u = {0xFFFE}}, - .dst = {.protonum = 0xFF}}, - .help = ras_help, -}; - -/****************************************************************************/ -static void ip_conntrack_ras_expect(struct ip_conntrack *new, - struct ip_conntrack_expect *this) -{ - write_lock_bh(&ip_conntrack_lock); - new->helper = &ip_conntrack_helper_ras; - write_unlock_bh(&ip_conntrack_lock); -} - -/****************************************************************************/ -/* Not __exit - called from init() */ -static void fini(void) -{ - ip_conntrack_helper_unregister(&ip_conntrack_helper_ras); - ip_conntrack_helper_unregister(&ip_conntrack_helper_q931); - kfree(h323_buffer); - DEBUGP("ip_ct_h323: fini\n"); -} - -/****************************************************************************/ -static int __init init(void) -{ - int ret; - - h323_buffer = kmalloc(65536, GFP_KERNEL); - if (!h323_buffer) - return -ENOMEM; - if ((ret = ip_conntrack_helper_register(&ip_conntrack_helper_q931)) || - (ret = ip_conntrack_helper_register(&ip_conntrack_helper_ras))) { - fini(); - return ret; - } 
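/* Editor's note -- illustrative sketch, not part of this patch.  The RAS
 * helper registered above uses a source-port mask of 0xFFFE rather than
 * 0xFFFF, which appears intended to wildcard the lowest bit of the port so
 * that one helper covers both H.225.0 RAS ports: 1719 (unicast RAS, the
 * assumed value of RAS_PORT) and 1718 (multicast gatekeeper discovery),
 * which differ only in bit 0.  A quick self-contained check of that claim: */
#include <stdint.h>
#include <stdio.h>

int main(void)
{
	const uint16_t ras_port = 1719;	/* assumed value of RAS_PORT */
	const uint16_t mask = 0xFFFE;	/* low bit wildcarded */
	uint16_t port;

	for (port = 1717; port <= 1720; port++)
		printf("udp/%u: %s\n", port,
		       (port & mask) == (ras_port & mask) ? "covered" : "not covered");
	return 0;
}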
- DEBUGP("ip_ct_h323: init success\n"); - return 0; -} - -/****************************************************************************/ -module_init(init); -module_exit(fini); - -EXPORT_SYMBOL_GPL(get_h225_addr); -EXPORT_SYMBOL_GPL(ip_conntrack_h245_expect); -EXPORT_SYMBOL_GPL(ip_conntrack_q931_expect); -EXPORT_SYMBOL_GPL(set_h245_addr_hook); -EXPORT_SYMBOL_GPL(set_h225_addr_hook); -EXPORT_SYMBOL_GPL(set_sig_addr_hook); -EXPORT_SYMBOL_GPL(set_ras_addr_hook); -EXPORT_SYMBOL_GPL(nat_rtp_rtcp_hook); -EXPORT_SYMBOL_GPL(nat_t120_hook); -EXPORT_SYMBOL_GPL(nat_h245_hook); -EXPORT_SYMBOL_GPL(nat_callforwarding_hook); -EXPORT_SYMBOL_GPL(nat_q931_hook); - -MODULE_AUTHOR("Jing Min Zhao "); -MODULE_DESCRIPTION("H.323 connection tracking helper"); -MODULE_LICENSE("GPL"); diff -puN net/ipv4/netfilter/ip_conntrack_helper_pptp.c~git-net /dev/null --- a/net/ipv4/netfilter/ip_conntrack_helper_pptp.c +++ /dev/null @@ -1,684 +0,0 @@ -/* - * ip_conntrack_pptp.c - Version 3.0 - * - * Connection tracking support for PPTP (Point to Point Tunneling Protocol). - * PPTP is a a protocol for creating virtual private networks. - * It is a specification defined by Microsoft and some vendors - * working with Microsoft. PPTP is built on top of a modified - * version of the Internet Generic Routing Encapsulation Protocol. - * GRE is defined in RFC 1701 and RFC 1702. Documentation of - * PPTP can be found in RFC 2637 - * - * (C) 2000-2005 by Harald Welte - * - * Development of this code funded by Astaro AG (http://www.astaro.com/) - * - * Limitations: - * - We blindly assume that control connections are always - * established in PNS->PAC direction. This is a violation - * of RFFC2673 - * - We can only support one single call within each session - * - * TODO: - * - testing of incoming PPTP calls - * - * Changes: - * 2002-02-05 - Version 1.3 - * - Call ip_conntrack_unexpect_related() from - * pptp_destroy_siblings() to destroy expectations in case - * CALL_DISCONNECT_NOTIFY or tcp fin packet was seen - * (Philip Craig ) - * - Add Version information at module loadtime - * 2002-02-10 - Version 1.6 - * - move to C99 style initializers - * - remove second expectation if first arrives - * 2004-10-22 - Version 2.0 - * - merge Mandrake's 2.6.x port with recent 2.6.x API changes - * - fix lots of linear skb assumptions from Mandrake's port - * 2005-06-10 - Version 2.1 - * - use ip_conntrack_expect_free() instead of kfree() on the - * expect's (which are from the slab for quite some time) - * 2005-06-10 - Version 3.0 - * - port helper to post-2.6.11 API changes, - * funded by Oxcoda NetBox Blue (http://www.netboxblue.com/) - * 2005-07-30 - Version 3.1 - * - port helper to 2.6.13 API changes - * - */ - -#include -#include -#include -#include -#include - -#include -#include -#include -#include -#include - -#define IP_CT_PPTP_VERSION "3.1" - -MODULE_LICENSE("GPL"); -MODULE_AUTHOR("Harald Welte "); -MODULE_DESCRIPTION("Netfilter connection tracking helper module for PPTP"); - -static DEFINE_SPINLOCK(ip_pptp_lock); - -int -(*ip_nat_pptp_hook_outbound)(struct sk_buff **pskb, - struct ip_conntrack *ct, - enum ip_conntrack_info ctinfo, - struct PptpControlHeader *ctlh, - union pptp_ctrl_union *pptpReq); - -int -(*ip_nat_pptp_hook_inbound)(struct sk_buff **pskb, - struct ip_conntrack *ct, - enum ip_conntrack_info ctinfo, - struct PptpControlHeader *ctlh, - union pptp_ctrl_union *pptpReq); - -void -(*ip_nat_pptp_hook_exp_gre)(struct ip_conntrack_expect *expect_orig, - struct ip_conntrack_expect *expect_reply); - -void 
-(*ip_nat_pptp_hook_expectfn)(struct ip_conntrack *ct, - struct ip_conntrack_expect *exp); - -#if 0 -/* PptpControlMessageType names */ -const char *pptp_msg_name[] = { - "UNKNOWN_MESSAGE", - "START_SESSION_REQUEST", - "START_SESSION_REPLY", - "STOP_SESSION_REQUEST", - "STOP_SESSION_REPLY", - "ECHO_REQUEST", - "ECHO_REPLY", - "OUT_CALL_REQUEST", - "OUT_CALL_REPLY", - "IN_CALL_REQUEST", - "IN_CALL_REPLY", - "IN_CALL_CONNECT", - "CALL_CLEAR_REQUEST", - "CALL_DISCONNECT_NOTIFY", - "WAN_ERROR_NOTIFY", - "SET_LINK_INFO" -}; -EXPORT_SYMBOL(pptp_msg_name); -#define DEBUGP(format, args...) printk(KERN_DEBUG "%s:%s: " format, __FILE__, __FUNCTION__, ## args) -#else -#define DEBUGP(format, args...) -#endif - -#define SECS *HZ -#define MINS * 60 SECS -#define HOURS * 60 MINS - -#define PPTP_GRE_TIMEOUT (10 MINS) -#define PPTP_GRE_STREAM_TIMEOUT (5 HOURS) - -static void pptp_expectfn(struct ip_conntrack *ct, - struct ip_conntrack_expect *exp) -{ - typeof(ip_nat_pptp_hook_expectfn) ip_nat_pptp_expectfn; - - DEBUGP("increasing timeouts\n"); - - /* increase timeout of GRE data channel conntrack entry */ - ct->proto.gre.timeout = PPTP_GRE_TIMEOUT; - ct->proto.gre.stream_timeout = PPTP_GRE_STREAM_TIMEOUT; - - /* Can you see how rusty this code is, compared with the pre-2.6.11 - * one? That's what happened to my shiny newnat of 2002 ;( -HW */ - - rcu_read_lock(); - ip_nat_pptp_expectfn = rcu_dereference(ip_nat_pptp_hook_expectfn); - if (!ip_nat_pptp_expectfn) { - struct ip_conntrack_tuple inv_t; - struct ip_conntrack_expect *exp_other; - - /* obviously this tuple inversion only works until you do NAT */ - invert_tuplepr(&inv_t, &exp->tuple); - DEBUGP("trying to unexpect other dir: "); - DUMP_TUPLE(&inv_t); - - exp_other = ip_conntrack_expect_find_get(&inv_t); - if (exp_other) { - /* delete other expectation. */ - DEBUGP("found\n"); - ip_conntrack_unexpect_related(exp_other); - ip_conntrack_expect_put(exp_other); - } else { - DEBUGP("not found\n"); - } - } else { - /* we need more than simple inversion */ - ip_nat_pptp_expectfn(ct, exp); - } - rcu_read_unlock(); -} - -static int destroy_sibling_or_exp(const struct ip_conntrack_tuple *t) -{ - struct ip_conntrack_tuple_hash *h; - struct ip_conntrack_expect *exp; - - DEBUGP("trying to timeout ct or exp for tuple "); - DUMP_TUPLE(t); - - h = ip_conntrack_find_get(t, NULL); - if (h) { - struct ip_conntrack *sibling = tuplehash_to_ctrack(h); - DEBUGP("setting timeout of conntrack %p to 0\n", sibling); - sibling->proto.gre.timeout = 0; - sibling->proto.gre.stream_timeout = 0; - if (del_timer(&sibling->timeout)) - sibling->timeout.function((unsigned long)sibling); - ip_conntrack_put(sibling); - return 1; - } else { - exp = ip_conntrack_expect_find_get(t); - if (exp) { - DEBUGP("unexpect_related of expect %p\n", exp); - ip_conntrack_unexpect_related(exp); - ip_conntrack_expect_put(exp); - return 1; - } - } - - return 0; -} - - -/* timeout GRE data connections */ -static void pptp_destroy_siblings(struct ip_conntrack *ct) -{ - struct ip_conntrack_tuple t; - - ip_ct_gre_keymap_destroy(ct); - /* Since ct->sibling_list has literally rusted away in 2.6.11, - * we now need another way to find out about our sibling - * contrack and expects... 
-HW */ - - /* try original (pns->pac) tuple */ - memcpy(&t, &ct->tuplehash[IP_CT_DIR_ORIGINAL].tuple, sizeof(t)); - t.dst.protonum = IPPROTO_GRE; - t.src.u.gre.key = ct->help.ct_pptp_info.pns_call_id; - t.dst.u.gre.key = ct->help.ct_pptp_info.pac_call_id; - - if (!destroy_sibling_or_exp(&t)) - DEBUGP("failed to timeout original pns->pac ct/exp\n"); - - /* try reply (pac->pns) tuple */ - memcpy(&t, &ct->tuplehash[IP_CT_DIR_REPLY].tuple, sizeof(t)); - t.dst.protonum = IPPROTO_GRE; - t.src.u.gre.key = ct->help.ct_pptp_info.pac_call_id; - t.dst.u.gre.key = ct->help.ct_pptp_info.pns_call_id; - - if (!destroy_sibling_or_exp(&t)) - DEBUGP("failed to timeout reply pac->pns ct/exp\n"); -} - -/* expect GRE connections (PNS->PAC and PAC->PNS direction) */ -static inline int -exp_gre(struct ip_conntrack *ct, - __be16 callid, - __be16 peer_callid) -{ - struct ip_conntrack_expect *exp_orig, *exp_reply; - int ret = 1; - typeof(ip_nat_pptp_hook_exp_gre) ip_nat_pptp_exp_gre; - - exp_orig = ip_conntrack_expect_alloc(ct); - if (exp_orig == NULL) - goto out; - - exp_reply = ip_conntrack_expect_alloc(ct); - if (exp_reply == NULL) - goto out_put_orig; - - /* original direction, PNS->PAC */ - exp_orig->tuple.src.ip = ct->tuplehash[IP_CT_DIR_ORIGINAL].tuple.src.ip; - exp_orig->tuple.src.u.gre.key = peer_callid; - exp_orig->tuple.dst.ip = ct->tuplehash[IP_CT_DIR_ORIGINAL].tuple.dst.ip; - exp_orig->tuple.dst.u.gre.key = callid; - exp_orig->tuple.dst.protonum = IPPROTO_GRE; - - exp_orig->mask.src.ip = htonl(0xffffffff); - exp_orig->mask.src.u.all = 0; - exp_orig->mask.dst.u.gre.key = htons(0xffff); - exp_orig->mask.dst.ip = htonl(0xffffffff); - exp_orig->mask.dst.protonum = 0xff; - - exp_orig->master = ct; - exp_orig->expectfn = pptp_expectfn; - exp_orig->flags = 0; - - /* both expectations are identical apart from tuple */ - memcpy(exp_reply, exp_orig, sizeof(*exp_reply)); - - /* reply direction, PAC->PNS */ - exp_reply->tuple.src.ip = ct->tuplehash[IP_CT_DIR_REPLY].tuple.src.ip; - exp_reply->tuple.src.u.gre.key = callid; - exp_reply->tuple.dst.ip = ct->tuplehash[IP_CT_DIR_REPLY].tuple.dst.ip; - exp_reply->tuple.dst.u.gre.key = peer_callid; - exp_reply->tuple.dst.protonum = IPPROTO_GRE; - - ip_nat_pptp_exp_gre = rcu_dereference(ip_nat_pptp_hook_exp_gre); - if (ip_nat_pptp_exp_gre) - ip_nat_pptp_exp_gre(exp_orig, exp_reply); - if (ip_conntrack_expect_related(exp_orig) != 0) - goto out_put_both; - if (ip_conntrack_expect_related(exp_reply) != 0) - goto out_unexpect_orig; - - /* Add GRE keymap entries */ - if (ip_ct_gre_keymap_add(ct, &exp_orig->tuple, 0) != 0) - goto out_unexpect_both; - if (ip_ct_gre_keymap_add(ct, &exp_reply->tuple, 1) != 0) { - ip_ct_gre_keymap_destroy(ct); - goto out_unexpect_both; - } - ret = 0; - -out_put_both: - ip_conntrack_expect_put(exp_reply); -out_put_orig: - ip_conntrack_expect_put(exp_orig); -out: - return ret; - -out_unexpect_both: - ip_conntrack_unexpect_related(exp_reply); -out_unexpect_orig: - ip_conntrack_unexpect_related(exp_orig); - goto out_put_both; -} - -static inline int -pptp_inbound_pkt(struct sk_buff **pskb, - struct PptpControlHeader *ctlh, - union pptp_ctrl_union *pptpReq, - unsigned int reqlen, - struct ip_conntrack *ct, - enum ip_conntrack_info ctinfo) -{ - struct ip_ct_pptp_master *info = &ct->help.ct_pptp_info; - u_int16_t msg; - __be16 cid = 0, pcid = 0; - typeof(ip_nat_pptp_hook_inbound) ip_nat_pptp_inbound; - - msg = ntohs(ctlh->messageType); - DEBUGP("inbound control message %s\n", pptp_msg_name[msg]); - - switch (msg) { - case PPTP_START_SESSION_REPLY: - /* 
server confirms new control session */ - if (info->sstate < PPTP_SESSION_REQUESTED) - goto invalid; - if (pptpReq->srep.resultCode == PPTP_START_OK) - info->sstate = PPTP_SESSION_CONFIRMED; - else - info->sstate = PPTP_SESSION_ERROR; - break; - - case PPTP_STOP_SESSION_REPLY: - /* server confirms end of control session */ - if (info->sstate > PPTP_SESSION_STOPREQ) - goto invalid; - if (pptpReq->strep.resultCode == PPTP_STOP_OK) - info->sstate = PPTP_SESSION_NONE; - else - info->sstate = PPTP_SESSION_ERROR; - break; - - case PPTP_OUT_CALL_REPLY: - /* server accepted call, we now expect GRE frames */ - if (info->sstate != PPTP_SESSION_CONFIRMED) - goto invalid; - if (info->cstate != PPTP_CALL_OUT_REQ && - info->cstate != PPTP_CALL_OUT_CONF) - goto invalid; - - cid = pptpReq->ocack.callID; - pcid = pptpReq->ocack.peersCallID; - if (info->pns_call_id != pcid) - goto invalid; - DEBUGP("%s, CID=%X, PCID=%X\n", pptp_msg_name[msg], - ntohs(cid), ntohs(pcid)); - - if (pptpReq->ocack.resultCode == PPTP_OUTCALL_CONNECT) { - info->cstate = PPTP_CALL_OUT_CONF; - info->pac_call_id = cid; - exp_gre(ct, cid, pcid); - } else - info->cstate = PPTP_CALL_NONE; - break; - - case PPTP_IN_CALL_REQUEST: - /* server tells us about incoming call request */ - if (info->sstate != PPTP_SESSION_CONFIRMED) - goto invalid; - - cid = pptpReq->icreq.callID; - DEBUGP("%s, CID=%X\n", pptp_msg_name[msg], ntohs(cid)); - info->cstate = PPTP_CALL_IN_REQ; - info->pac_call_id = cid; - break; - - case PPTP_IN_CALL_CONNECT: - /* server tells us about incoming call established */ - if (info->sstate != PPTP_SESSION_CONFIRMED) - goto invalid; - if (info->cstate != PPTP_CALL_IN_REP && - info->cstate != PPTP_CALL_IN_CONF) - goto invalid; - - pcid = pptpReq->iccon.peersCallID; - cid = info->pac_call_id; - - if (info->pns_call_id != pcid) - goto invalid; - - DEBUGP("%s, PCID=%X\n", pptp_msg_name[msg], ntohs(pcid)); - info->cstate = PPTP_CALL_IN_CONF; - - /* we expect a GRE connection from PAC to PNS */ - exp_gre(ct, cid, pcid); - break; - - case PPTP_CALL_DISCONNECT_NOTIFY: - /* server confirms disconnect */ - cid = pptpReq->disc.callID; - DEBUGP("%s, CID=%X\n", pptp_msg_name[msg], ntohs(cid)); - info->cstate = PPTP_CALL_NONE; - - /* untrack this call id, unexpect GRE packets */ - pptp_destroy_siblings(ct); - break; - - case PPTP_WAN_ERROR_NOTIFY: - case PPTP_ECHO_REQUEST: - case PPTP_ECHO_REPLY: - /* I don't have to explain these ;) */ - break; - default: - goto invalid; - } - - ip_nat_pptp_inbound = rcu_dereference(ip_nat_pptp_hook_inbound); - if (ip_nat_pptp_inbound) - return ip_nat_pptp_inbound(pskb, ct, ctinfo, ctlh, pptpReq); - return NF_ACCEPT; - -invalid: - DEBUGP("invalid %s: type=%d cid=%u pcid=%u " - "cstate=%d sstate=%d pns_cid=%u pac_cid=%u\n", - msg <= PPTP_MSG_MAX ? 
pptp_msg_name[msg] : pptp_msg_name[0], - msg, ntohs(cid), ntohs(pcid), info->cstate, info->sstate, - ntohs(info->pns_call_id), ntohs(info->pac_call_id)); - return NF_ACCEPT; -} - -static inline int -pptp_outbound_pkt(struct sk_buff **pskb, - struct PptpControlHeader *ctlh, - union pptp_ctrl_union *pptpReq, - unsigned int reqlen, - struct ip_conntrack *ct, - enum ip_conntrack_info ctinfo) -{ - struct ip_ct_pptp_master *info = &ct->help.ct_pptp_info; - u_int16_t msg; - __be16 cid = 0, pcid = 0; - typeof(ip_nat_pptp_hook_outbound) ip_nat_pptp_outbound; - - msg = ntohs(ctlh->messageType); - DEBUGP("outbound control message %s\n", pptp_msg_name[msg]); - - switch (msg) { - case PPTP_START_SESSION_REQUEST: - /* client requests for new control session */ - if (info->sstate != PPTP_SESSION_NONE) - goto invalid; - info->sstate = PPTP_SESSION_REQUESTED; - break; - case PPTP_STOP_SESSION_REQUEST: - /* client requests end of control session */ - info->sstate = PPTP_SESSION_STOPREQ; - break; - - case PPTP_OUT_CALL_REQUEST: - /* client initiating connection to server */ - if (info->sstate != PPTP_SESSION_CONFIRMED) - goto invalid; - info->cstate = PPTP_CALL_OUT_REQ; - /* track PNS call id */ - cid = pptpReq->ocreq.callID; - DEBUGP("%s, CID=%X\n", pptp_msg_name[msg], ntohs(cid)); - info->pns_call_id = cid; - break; - case PPTP_IN_CALL_REPLY: - /* client answers incoming call */ - if (info->cstate != PPTP_CALL_IN_REQ && - info->cstate != PPTP_CALL_IN_REP) - goto invalid; - - cid = pptpReq->icack.callID; - pcid = pptpReq->icack.peersCallID; - if (info->pac_call_id != pcid) - goto invalid; - DEBUGP("%s, CID=%X PCID=%X\n", pptp_msg_name[msg], - ntohs(cid), ntohs(pcid)); - - if (pptpReq->icack.resultCode == PPTP_INCALL_ACCEPT) { - /* part two of the three-way handshake */ - info->cstate = PPTP_CALL_IN_REP; - info->pns_call_id = cid; - } else - info->cstate = PPTP_CALL_NONE; - break; - - case PPTP_CALL_CLEAR_REQUEST: - /* client requests hangup of call */ - if (info->sstate != PPTP_SESSION_CONFIRMED) - goto invalid; - /* FUTURE: iterate over all calls and check if - * call ID is valid. We don't do this without newnat, - * because we only know about last call */ - info->cstate = PPTP_CALL_CLEAR_REQ; - break; - case PPTP_SET_LINK_INFO: - case PPTP_ECHO_REQUEST: - case PPTP_ECHO_REPLY: - /* I don't have to explain these ;) */ - break; - default: - goto invalid; - } - - ip_nat_pptp_outbound = rcu_dereference(ip_nat_pptp_hook_outbound); - if (ip_nat_pptp_outbound) - return ip_nat_pptp_outbound(pskb, ct, ctinfo, ctlh, pptpReq); - return NF_ACCEPT; - -invalid: - DEBUGP("invalid %s: type=%d cid=%u pcid=%u " - "cstate=%d sstate=%d pns_cid=%u pac_cid=%u\n", - msg <= PPTP_MSG_MAX ? 
pptp_msg_name[msg] : pptp_msg_name[0], - msg, ntohs(cid), ntohs(pcid), info->cstate, info->sstate, - ntohs(info->pns_call_id), ntohs(info->pac_call_id)); - return NF_ACCEPT; -} - -static const unsigned int pptp_msg_size[] = { - [PPTP_START_SESSION_REQUEST] = sizeof(struct PptpStartSessionRequest), - [PPTP_START_SESSION_REPLY] = sizeof(struct PptpStartSessionReply), - [PPTP_STOP_SESSION_REQUEST] = sizeof(struct PptpStopSessionRequest), - [PPTP_STOP_SESSION_REPLY] = sizeof(struct PptpStopSessionReply), - [PPTP_OUT_CALL_REQUEST] = sizeof(struct PptpOutCallRequest), - [PPTP_OUT_CALL_REPLY] = sizeof(struct PptpOutCallReply), - [PPTP_IN_CALL_REQUEST] = sizeof(struct PptpInCallRequest), - [PPTP_IN_CALL_REPLY] = sizeof(struct PptpInCallReply), - [PPTP_IN_CALL_CONNECT] = sizeof(struct PptpInCallConnected), - [PPTP_CALL_CLEAR_REQUEST] = sizeof(struct PptpClearCallRequest), - [PPTP_CALL_DISCONNECT_NOTIFY] = sizeof(struct PptpCallDisconnectNotify), - [PPTP_WAN_ERROR_NOTIFY] = sizeof(struct PptpWanErrorNotify), - [PPTP_SET_LINK_INFO] = sizeof(struct PptpSetLinkInfo), -}; - -/* track caller id inside control connection, call expect_related */ -static int -conntrack_pptp_help(struct sk_buff **pskb, - struct ip_conntrack *ct, enum ip_conntrack_info ctinfo) - -{ - int dir = CTINFO2DIR(ctinfo); - struct ip_ct_pptp_master *info = &ct->help.ct_pptp_info; - struct tcphdr _tcph, *tcph; - struct pptp_pkt_hdr _pptph, *pptph; - struct PptpControlHeader _ctlh, *ctlh; - union pptp_ctrl_union _pptpReq, *pptpReq; - unsigned int tcplen = (*pskb)->len - (*pskb)->nh.iph->ihl * 4; - unsigned int datalen, reqlen, nexthdr_off; - int oldsstate, oldcstate; - int ret; - u_int16_t msg; - - /* don't do any tracking before tcp handshake complete */ - if (ctinfo != IP_CT_ESTABLISHED - && ctinfo != IP_CT_ESTABLISHED+IP_CT_IS_REPLY) { - DEBUGP("ctinfo = %u, skipping\n", ctinfo); - return NF_ACCEPT; - } - - nexthdr_off = (*pskb)->nh.iph->ihl*4; - tcph = skb_header_pointer(*pskb, nexthdr_off, sizeof(_tcph), &_tcph); - BUG_ON(!tcph); - nexthdr_off += tcph->doff * 4; - datalen = tcplen - tcph->doff * 4; - - pptph = skb_header_pointer(*pskb, nexthdr_off, sizeof(_pptph), &_pptph); - if (!pptph) { - DEBUGP("no full PPTP header, can't track\n"); - return NF_ACCEPT; - } - nexthdr_off += sizeof(_pptph); - datalen -= sizeof(_pptph); - - /* if it's not a control message we can't do anything with it */ - if (ntohs(pptph->packetType) != PPTP_PACKET_CONTROL || - ntohl(pptph->magicCookie) != PPTP_MAGIC_COOKIE) { - DEBUGP("not a control packet\n"); - return NF_ACCEPT; - } - - ctlh = skb_header_pointer(*pskb, nexthdr_off, sizeof(_ctlh), &_ctlh); - if (!ctlh) - return NF_ACCEPT; - nexthdr_off += sizeof(_ctlh); - datalen -= sizeof(_ctlh); - - reqlen = datalen; - msg = ntohs(ctlh->messageType); - if (msg > 0 && msg <= PPTP_MSG_MAX && reqlen < pptp_msg_size[msg]) - return NF_ACCEPT; - if (reqlen > sizeof(*pptpReq)) - reqlen = sizeof(*pptpReq); - - pptpReq = skb_header_pointer(*pskb, nexthdr_off, reqlen, &_pptpReq); - if (!pptpReq) - return NF_ACCEPT; - - oldsstate = info->sstate; - oldcstate = info->cstate; - - spin_lock_bh(&ip_pptp_lock); - - /* FIXME: We just blindly assume that the control connection is always - * established from PNS->PAC. 
However, RFC makes no guarantee */ - if (dir == IP_CT_DIR_ORIGINAL) - /* client -> server (PNS -> PAC) */ - ret = pptp_outbound_pkt(pskb, ctlh, pptpReq, reqlen, ct, - ctinfo); - else - /* server -> client (PAC -> PNS) */ - ret = pptp_inbound_pkt(pskb, ctlh, pptpReq, reqlen, ct, - ctinfo); - DEBUGP("sstate: %d->%d, cstate: %d->%d\n", - oldsstate, info->sstate, oldcstate, info->cstate); - spin_unlock_bh(&ip_pptp_lock); - - return ret; -} - -/* control protocol helper */ -static struct ip_conntrack_helper pptp = { - .list = { NULL, NULL }, - .name = "pptp", - .me = THIS_MODULE, - .max_expected = 2, - .timeout = 5 * 60, - .tuple = { .src = { .ip = 0, - .u = { .tcp = { .port = - __constant_htons(PPTP_CONTROL_PORT) } } - }, - .dst = { .ip = 0, - .u = { .all = 0 }, - .protonum = IPPROTO_TCP - } - }, - .mask = { .src = { .ip = 0, - .u = { .tcp = { .port = __constant_htons(0xffff) } } - }, - .dst = { .ip = 0, - .u = { .all = 0 }, - .protonum = 0xff - } - }, - .help = conntrack_pptp_help, - .destroy = pptp_destroy_siblings, -}; - -extern void ip_ct_proto_gre_fini(void); -extern int __init ip_ct_proto_gre_init(void); - -/* ip_conntrack_pptp initialization */ -static int __init ip_conntrack_helper_pptp_init(void) -{ - int retcode; - - retcode = ip_ct_proto_gre_init(); - if (retcode < 0) - return retcode; - - DEBUGP(" registering helper\n"); - if ((retcode = ip_conntrack_helper_register(&pptp))) { - printk(KERN_ERR "Unable to register conntrack application " - "helper for pptp: %d\n", retcode); - ip_ct_proto_gre_fini(); - return retcode; - } - - printk("ip_conntrack_pptp version %s loaded\n", IP_CT_PPTP_VERSION); - return 0; -} - -static void __exit ip_conntrack_helper_pptp_fini(void) -{ - ip_conntrack_helper_unregister(&pptp); - ip_ct_proto_gre_fini(); - printk("ip_conntrack_pptp version %s unloaded\n", IP_CT_PPTP_VERSION); -} - -module_init(ip_conntrack_helper_pptp_init); -module_exit(ip_conntrack_helper_pptp_fini); - -EXPORT_SYMBOL(ip_nat_pptp_hook_outbound); -EXPORT_SYMBOL(ip_nat_pptp_hook_inbound); -EXPORT_SYMBOL(ip_nat_pptp_hook_exp_gre); -EXPORT_SYMBOL(ip_nat_pptp_hook_expectfn); diff -puN net/ipv4/netfilter/ip_conntrack_irc.c~git-net /dev/null --- a/net/ipv4/netfilter/ip_conntrack_irc.c +++ /dev/null @@ -1,314 +0,0 @@ -/* IRC extension for IP connection tracking, Version 1.21 - * (C) 2000-2002 by Harald Welte - * based on RR's ip_conntrack_ftp.c - * - * ip_conntrack_irc.c,v 1.21 2002/02/05 14:49:26 laforge Exp - * - * This program is free software; you can redistribute it and/or - * modify it under the terms of the GNU General Public License - * as published by the Free Software Foundation; either version - * 2 of the License, or (at your option) any later version. - ** - * Module load syntax: - * insmod ip_conntrack_irc.o ports=port1,port2,...port - * max_dcc_channels=n dcc_timeout=secs - * - * please give the ports of all IRC servers You wish to connect to. - * If You don't specify ports, the default will be port 6667. - * With max_dcc_channels you can define the maximum number of not - * yet answered DCC channels per IRC session (default 8). - * With dcc_timeout you can specify how long the system waits for - * an expected DCC channel (default 300 seconds). - * - */ - -#include -#include -#include -#include -#include - -#include -#include -#include - -#define MAX_PORTS 8 -static unsigned short ports[MAX_PORTS]; -static int ports_c; -static unsigned int max_dcc_channels = 8; -static unsigned int dcc_timeout = 300; -/* This is slow, but it's simple. 
--RR */ -static char *irc_buffer; -static DEFINE_SPINLOCK(irc_buffer_lock); - -unsigned int (*ip_nat_irc_hook)(struct sk_buff **pskb, - enum ip_conntrack_info ctinfo, - unsigned int matchoff, - unsigned int matchlen, - struct ip_conntrack_expect *exp); -EXPORT_SYMBOL_GPL(ip_nat_irc_hook); - -MODULE_AUTHOR("Harald Welte "); -MODULE_DESCRIPTION("IRC (DCC) connection tracking helper"); -MODULE_LICENSE("GPL"); -module_param_array(ports, ushort, &ports_c, 0400); -MODULE_PARM_DESC(ports, "port numbers of IRC servers"); -module_param(max_dcc_channels, uint, 0400); -MODULE_PARM_DESC(max_dcc_channels, "max number of expected DCC channels per IRC session"); -module_param(dcc_timeout, uint, 0400); -MODULE_PARM_DESC(dcc_timeout, "timeout on for unestablished DCC channels"); - -static const char *dccprotos[] = { "SEND ", "CHAT ", "MOVE ", "TSEND ", "SCHAT " }; -#define MINMATCHLEN 5 - -#if 0 -#define DEBUGP(format, args...) printk(KERN_DEBUG "%s:%s:" format, \ - __FILE__, __FUNCTION__ , ## args) -#else -#define DEBUGP(format, args...) -#endif - -static int parse_dcc(char *data, char *data_end, u_int32_t *ip, - u_int16_t *port, char **ad_beg_p, char **ad_end_p) -/* tries to get the ip_addr and port out of a dcc command - return value: -1 on failure, 0 on success - data pointer to first byte of DCC command data - data_end pointer to last byte of dcc command data - ip returns parsed ip of dcc command - port returns parsed port of dcc command - ad_beg_p returns pointer to first byte of addr data - ad_end_p returns pointer to last byte of addr data */ -{ - - /* at least 12: "AAAAAAAA P\1\n" */ - while (*data++ != ' ') - if (data > data_end - 12) - return -1; - - *ad_beg_p = data; - *ip = simple_strtoul(data, &data, 10); - - /* skip blanks between ip and port */ - while (*data == ' ') { - if (data >= data_end) - return -1; - data++; - } - - *port = simple_strtoul(data, &data, 10); - *ad_end_p = data; - - return 0; -} - -static int help(struct sk_buff **pskb, - struct ip_conntrack *ct, enum ip_conntrack_info ctinfo) -{ - unsigned int dataoff; - struct tcphdr _tcph, *th; - char *data, *data_limit, *ib_ptr; - int dir = CTINFO2DIR(ctinfo); - struct ip_conntrack_expect *exp; - u32 seq; - u_int32_t dcc_ip; - u_int16_t dcc_port; - int i, ret = NF_ACCEPT; - char *addr_beg_p, *addr_end_p; - typeof(ip_nat_irc_hook) ip_nat_irc; - - DEBUGP("entered\n"); - - /* If packet is coming from IRC server */ - if (dir == IP_CT_DIR_REPLY) - return NF_ACCEPT; - - /* Until there's been traffic both ways, don't look in packets. */ - if (ctinfo != IP_CT_ESTABLISHED - && ctinfo != IP_CT_ESTABLISHED + IP_CT_IS_REPLY) { - DEBUGP("Conntrackinfo = %u\n", ctinfo); - return NF_ACCEPT; - } - - /* Not a full tcp header? */ - th = skb_header_pointer(*pskb, (*pskb)->nh.iph->ihl*4, - sizeof(_tcph), &_tcph); - if (th == NULL) - return NF_ACCEPT; - - /* No data? 
*/ - dataoff = (*pskb)->nh.iph->ihl*4 + th->doff*4; - if (dataoff >= (*pskb)->len) - return NF_ACCEPT; - - spin_lock_bh(&irc_buffer_lock); - ib_ptr = skb_header_pointer(*pskb, dataoff, - (*pskb)->len - dataoff, irc_buffer); - BUG_ON(ib_ptr == NULL); - - data = ib_ptr; - data_limit = ib_ptr + (*pskb)->len - dataoff; - - /* strlen("\1DCC SENT t AAAAAAAA P\1\n")=24 - * 5+MINMATCHLEN+strlen("t AAAAAAAA P\1\n")=14 */ - while (data < (data_limit - (19 + MINMATCHLEN))) { - if (memcmp(data, "\1DCC ", 5)) { - data++; - continue; - } - - data += 5; - /* we have at least (19+MINMATCHLEN)-5 bytes valid data left */ - - DEBUGP("DCC found in master %u.%u.%u.%u:%u %u.%u.%u.%u:%u...\n", - NIPQUAD(iph->saddr), ntohs(th->source), - NIPQUAD(iph->daddr), ntohs(th->dest)); - - for (i = 0; i < ARRAY_SIZE(dccprotos); i++) { - if (memcmp(data, dccprotos[i], strlen(dccprotos[i]))) { - /* no match */ - continue; - } - - DEBUGP("DCC %s detected\n", dccprotos[i]); - data += strlen(dccprotos[i]); - /* we have at least - * (19+MINMATCHLEN)-5-dccprotos[i].matchlen bytes valid - * data left (== 14/13 bytes) */ - if (parse_dcc((char *)data, data_limit, &dcc_ip, - &dcc_port, &addr_beg_p, &addr_end_p)) { - /* unable to parse */ - DEBUGP("unable to parse dcc command\n"); - continue; - } - DEBUGP("DCC bound ip/port: %u.%u.%u.%u:%u\n", - HIPQUAD(dcc_ip), dcc_port); - - /* dcc_ip can be the internal OR external (NAT'ed) IP - * Tiago Sousa */ - if (ct->tuplehash[dir].tuple.src.ip != htonl(dcc_ip) - && ct->tuplehash[IP_CT_DIR_REPLY].tuple.dst.ip != htonl(dcc_ip)) { - if (net_ratelimit()) - printk(KERN_WARNING - "Forged DCC command from " - "%u.%u.%u.%u: %u.%u.%u.%u:%u\n", - NIPQUAD(ct->tuplehash[dir].tuple.src.ip), - HIPQUAD(dcc_ip), dcc_port); - - continue; - } - - exp = ip_conntrack_expect_alloc(ct); - if (exp == NULL) { - ret = NF_DROP; - goto out; - } - - /* save position of address in dcc string, - * necessary for NAT */ - DEBUGP("tcph->seq = %u\n", th->seq); - seq = ntohl(th->seq) + (addr_beg_p - ib_ptr); - - /* We refer to the reverse direction ("!dir") - * tuples here, because we're expecting - * something in the other * direction. - * Doesn't matter unless NAT is happening. */ - exp->tuple = ((struct ip_conntrack_tuple) - { { 0, { 0 } }, - { ct->tuplehash[!dir].tuple.dst.ip, - { .tcp = { htons(dcc_port) } }, - IPPROTO_TCP }}); - exp->mask = ((struct ip_conntrack_tuple) - { { 0, { 0 } }, - { htonl(0xFFFFFFFF), - { .tcp = { htons(0xFFFF) } }, 0xFF }}); - exp->expectfn = NULL; - exp->flags = 0; - ip_nat_irc = rcu_dereference(ip_nat_irc_hook); - if (ip_nat_irc) - ret = ip_nat_irc(pskb, ctinfo, - addr_beg_p - ib_ptr, - addr_end_p - addr_beg_p, - exp); - else if (ip_conntrack_expect_related(exp) != 0) - ret = NF_DROP; - ip_conntrack_expect_put(exp); - goto out; - } /* for .. NUM_DCCPROTO */ - } /* while data < ... 
*/ - - out: - spin_unlock_bh(&irc_buffer_lock); - return ret; -} - -static struct ip_conntrack_helper irc_helpers[MAX_PORTS]; -static char irc_names[MAX_PORTS][sizeof("irc-65535")]; - -static void ip_conntrack_irc_fini(void); - -static int __init ip_conntrack_irc_init(void) -{ - int i, ret; - struct ip_conntrack_helper *hlpr; - char *tmpname; - - if (max_dcc_channels < 1) { - printk("ip_conntrack_irc: max_dcc_channels must be a positive integer\n"); - return -EBUSY; - } - - irc_buffer = kmalloc(65536, GFP_KERNEL); - if (!irc_buffer) - return -ENOMEM; - - /* If no port given, default to standard irc port */ - if (ports_c == 0) - ports[ports_c++] = IRC_PORT; - - for (i = 0; i < ports_c; i++) { - hlpr = &irc_helpers[i]; - hlpr->tuple.src.u.tcp.port = htons(ports[i]); - hlpr->tuple.dst.protonum = IPPROTO_TCP; - hlpr->mask.src.u.tcp.port = htons(0xFFFF); - hlpr->mask.dst.protonum = 0xFF; - hlpr->max_expected = max_dcc_channels; - hlpr->timeout = dcc_timeout; - hlpr->me = THIS_MODULE; - hlpr->help = help; - - tmpname = &irc_names[i][0]; - if (ports[i] == IRC_PORT) - sprintf(tmpname, "irc"); - else - sprintf(tmpname, "irc-%d", i); - hlpr->name = tmpname; - - DEBUGP("port #%d: %d\n", i, ports[i]); - - ret = ip_conntrack_helper_register(hlpr); - - if (ret) { - printk("ip_conntrack_irc: ERROR registering port %d\n", - ports[i]); - ip_conntrack_irc_fini(); - return -EBUSY; - } - } - return 0; -} - -/* This function is intentionally _NOT_ defined as __exit, because - * it is needed by the init function */ -static void ip_conntrack_irc_fini(void) -{ - int i; - for (i = 0; i < ports_c; i++) { - DEBUGP("unregistering port %d\n", - ports[i]); - ip_conntrack_helper_unregister(&irc_helpers[i]); - } - kfree(irc_buffer); -} - -module_init(ip_conntrack_irc_init); -module_exit(ip_conntrack_irc_fini); diff -puN net/ipv4/netfilter/ip_conntrack_netbios_ns.c~git-net /dev/null --- a/net/ipv4/netfilter/ip_conntrack_netbios_ns.c +++ /dev/null @@ -1,143 +0,0 @@ -/* - * NetBIOS name service broadcast connection tracking helper - * - * (c) 2005 Patrick McHardy - * - * This program is free software; you can redistribute it and/or - * modify it under the terms of the GNU General Public License - * as published by the Free Software Foundation; either version - * 2 of the License, or (at your option) any later version. - */ -/* - * This helper tracks locally originating NetBIOS name service - * requests by issuing permanent expectations (valid until - * timing out) matching all reply connections from the - * destination network. The only NetBIOS specific thing is - * actually the port number. 
- */ -#include -#include -#include -#include -#include -#include -#include -#include -#include -#include - -#include -#include -#include -#include - -#define NMBD_PORT 137 - -MODULE_AUTHOR("Patrick McHardy "); -MODULE_DESCRIPTION("NetBIOS name service broadcast connection tracking helper"); -MODULE_LICENSE("GPL"); - -static unsigned int timeout = 3; -module_param(timeout, uint, 0400); -MODULE_PARM_DESC(timeout, "timeout for master connection/replies in seconds"); - -static int help(struct sk_buff **pskb, - struct ip_conntrack *ct, enum ip_conntrack_info ctinfo) -{ - struct ip_conntrack_expect *exp; - struct iphdr *iph = (*pskb)->nh.iph; - struct rtable *rt = (struct rtable *)(*pskb)->dst; - struct in_device *in_dev; - __be32 mask = 0; - - /* we're only interested in locally generated packets */ - if ((*pskb)->sk == NULL) - goto out; - if (rt == NULL || !(rt->rt_flags & RTCF_BROADCAST)) - goto out; - if (CTINFO2DIR(ctinfo) != IP_CT_DIR_ORIGINAL) - goto out; - - rcu_read_lock(); - in_dev = __in_dev_get_rcu(rt->u.dst.dev); - if (in_dev != NULL) { - for_primary_ifa(in_dev) { - if (ifa->ifa_broadcast == iph->daddr) { - mask = ifa->ifa_mask; - break; - } - } endfor_ifa(in_dev); - } - rcu_read_unlock(); - - if (mask == 0) - goto out; - - exp = ip_conntrack_expect_alloc(ct); - if (exp == NULL) - goto out; - - exp->tuple = ct->tuplehash[IP_CT_DIR_REPLY].tuple; - exp->tuple.src.u.udp.port = htons(NMBD_PORT); - - exp->mask.src.ip = mask; - exp->mask.src.u.udp.port = htons(0xFFFF); - exp->mask.dst.ip = htonl(0xFFFFFFFF); - exp->mask.dst.u.udp.port = htons(0xFFFF); - exp->mask.dst.protonum = 0xFF; - - exp->expectfn = NULL; - exp->flags = IP_CT_EXPECT_PERMANENT; - - ip_conntrack_expect_related(exp); - ip_conntrack_expect_put(exp); - - ip_ct_refresh(ct, *pskb, timeout * HZ); -out: - return NF_ACCEPT; -} - -static struct ip_conntrack_helper helper = { - .name = "netbios-ns", - .tuple = { - .src = { - .u = { - .udp = { - .port = __constant_htons(NMBD_PORT), - } - } - }, - .dst = { - .protonum = IPPROTO_UDP, - }, - }, - .mask = { - .src = { - .u = { - .udp = { - .port = __constant_htons(0xFFFF), - } - } - }, - .dst = { - .protonum = 0xFF, - }, - }, - .max_expected = 1, - .me = THIS_MODULE, - .help = help, -}; - -static int __init ip_conntrack_netbios_ns_init(void) -{ - helper.timeout = timeout; - return ip_conntrack_helper_register(&helper); -} - -static void __exit ip_conntrack_netbios_ns_fini(void) -{ - ip_conntrack_helper_unregister(&helper); -} - -module_init(ip_conntrack_netbios_ns_init); -module_exit(ip_conntrack_netbios_ns_fini); diff -puN net/ipv4/netfilter/ip_conntrack_netlink.c~git-net /dev/null --- a/net/ipv4/netfilter/ip_conntrack_netlink.c +++ /dev/null @@ -1,1577 +0,0 @@ -/* Connection tracking via netlink socket. Allows for user space - * protocol helpers and general trouble making from userspace. - * - * (C) 2001 by Jay Schulist - * (C) 2002-2005 by Harald Welte - * (C) 2003 by Patrick Mchardy - * (C) 2005-2006 by Pablo Neira Ayuso - * - * I've reworked this stuff to use attributes instead of conntrack - * structures. 5.44 am. I need more tea. --pablo 05/07/11. - * - * Initial connection tracking via netlink development funded and - * generally made possible by Network Robots, Inc. (www.networkrobots.com) - * - * Further development of this code funded by Astaro AG (http://www.astaro.com) - * - * This software may be used and distributed according to the terms - * of the GNU General Public License, incorporated herein by reference. 
- */ - -#include -#include -#include -#include -#include -#include -#include -#include -#include -#include -#include - -#include -#include -#include -#include -#include -#include - -#include -#include - -MODULE_LICENSE("GPL"); - -static char __initdata version[] = "0.90"; - -static inline int -ctnetlink_dump_tuples_proto(struct sk_buff *skb, - const struct ip_conntrack_tuple *tuple, - struct ip_conntrack_protocol *proto) -{ - int ret = 0; - struct nfattr *nest_parms = NFA_NEST(skb, CTA_TUPLE_PROTO); - - NFA_PUT(skb, CTA_PROTO_NUM, sizeof(u_int8_t), &tuple->dst.protonum); - - if (likely(proto->tuple_to_nfattr)) - ret = proto->tuple_to_nfattr(skb, tuple); - - NFA_NEST_END(skb, nest_parms); - - return ret; - -nfattr_failure: - return -1; -} - -static inline int -ctnetlink_dump_tuples_ip(struct sk_buff *skb, - const struct ip_conntrack_tuple *tuple) -{ - struct nfattr *nest_parms = NFA_NEST(skb, CTA_TUPLE_IP); - - NFA_PUT(skb, CTA_IP_V4_SRC, sizeof(__be32), &tuple->src.ip); - NFA_PUT(skb, CTA_IP_V4_DST, sizeof(__be32), &tuple->dst.ip); - - NFA_NEST_END(skb, nest_parms); - - return 0; - -nfattr_failure: - return -1; -} - -static inline int -ctnetlink_dump_tuples(struct sk_buff *skb, - const struct ip_conntrack_tuple *tuple) -{ - int ret; - struct ip_conntrack_protocol *proto; - - ret = ctnetlink_dump_tuples_ip(skb, tuple); - if (unlikely(ret < 0)) - return ret; - - proto = ip_conntrack_proto_find_get(tuple->dst.protonum); - ret = ctnetlink_dump_tuples_proto(skb, tuple, proto); - ip_conntrack_proto_put(proto); - - return ret; -} - -static inline int -ctnetlink_dump_status(struct sk_buff *skb, const struct ip_conntrack *ct) -{ - __be32 status = htonl((u_int32_t) ct->status); - NFA_PUT(skb, CTA_STATUS, sizeof(status), &status); - return 0; - -nfattr_failure: - return -1; -} - -static inline int -ctnetlink_dump_timeout(struct sk_buff *skb, const struct ip_conntrack *ct) -{ - long timeout_l = ct->timeout.expires - jiffies; - __be32 timeout; - - if (timeout_l < 0) - timeout = 0; - else - timeout = htonl(timeout_l / HZ); - - NFA_PUT(skb, CTA_TIMEOUT, sizeof(timeout), &timeout); - return 0; - -nfattr_failure: - return -1; -} - -static inline int -ctnetlink_dump_protoinfo(struct sk_buff *skb, const struct ip_conntrack *ct) -{ - struct ip_conntrack_protocol *proto = ip_conntrack_proto_find_get(ct->tuplehash[IP_CT_DIR_ORIGINAL].tuple.dst.protonum); - - struct nfattr *nest_proto; - int ret; - - if (!proto->to_nfattr) { - ip_conntrack_proto_put(proto); - return 0; - } - - nest_proto = NFA_NEST(skb, CTA_PROTOINFO); - - ret = proto->to_nfattr(skb, nest_proto, ct); - - ip_conntrack_proto_put(proto); - - NFA_NEST_END(skb, nest_proto); - - return ret; - -nfattr_failure: - ip_conntrack_proto_put(proto); - return -1; -} - -static inline int -ctnetlink_dump_helpinfo(struct sk_buff *skb, const struct ip_conntrack *ct) -{ - struct nfattr *nest_helper; - - if (!ct->helper) - return 0; - - nest_helper = NFA_NEST(skb, CTA_HELP); - NFA_PUT(skb, CTA_HELP_NAME, strlen(ct->helper->name), ct->helper->name); - - if (ct->helper->to_nfattr) - ct->helper->to_nfattr(skb, ct); - - NFA_NEST_END(skb, nest_helper); - - return 0; - -nfattr_failure: - return -1; -} - -#ifdef CONFIG_IP_NF_CT_ACCT -static inline int -ctnetlink_dump_counters(struct sk_buff *skb, const struct ip_conntrack *ct, - enum ip_conntrack_dir dir) -{ - enum ctattr_type type = dir ? 
CTA_COUNTERS_REPLY: CTA_COUNTERS_ORIG; - struct nfattr *nest_count = NFA_NEST(skb, type); - __be32 tmp; - - tmp = htonl(ct->counters[dir].packets); - NFA_PUT(skb, CTA_COUNTERS32_PACKETS, sizeof(__be32), &tmp); - - tmp = htonl(ct->counters[dir].bytes); - NFA_PUT(skb, CTA_COUNTERS32_BYTES, sizeof(__be32), &tmp); - - NFA_NEST_END(skb, nest_count); - - return 0; - -nfattr_failure: - return -1; -} -#else -#define ctnetlink_dump_counters(a, b, c) (0) -#endif - -#ifdef CONFIG_IP_NF_CONNTRACK_MARK -static inline int -ctnetlink_dump_mark(struct sk_buff *skb, const struct ip_conntrack *ct) -{ - __be32 mark = htonl(ct->mark); - - NFA_PUT(skb, CTA_MARK, sizeof(__be32), &mark); - return 0; - -nfattr_failure: - return -1; -} -#else -#define ctnetlink_dump_mark(a, b) (0) -#endif - -static inline int -ctnetlink_dump_id(struct sk_buff *skb, const struct ip_conntrack *ct) -{ - __be32 id = htonl(ct->id); - NFA_PUT(skb, CTA_ID, sizeof(__be32), &id); - return 0; - -nfattr_failure: - return -1; -} - -static inline int -ctnetlink_dump_use(struct sk_buff *skb, const struct ip_conntrack *ct) -{ - __be32 use = htonl(atomic_read(&ct->ct_general.use)); - - NFA_PUT(skb, CTA_USE, sizeof(__be32), &use); - return 0; - -nfattr_failure: - return -1; -} - -#define tuple(ct, dir) (&(ct)->tuplehash[dir].tuple) - -static int -ctnetlink_fill_info(struct sk_buff *skb, u32 pid, u32 seq, - int event, int nowait, - const struct ip_conntrack *ct) -{ - struct nlmsghdr *nlh; - struct nfgenmsg *nfmsg; - struct nfattr *nest_parms; - unsigned char *b; - - b = skb->tail; - - event |= NFNL_SUBSYS_CTNETLINK << 8; - nlh = NLMSG_PUT(skb, pid, seq, event, sizeof(struct nfgenmsg)); - nfmsg = NLMSG_DATA(nlh); - - nlh->nlmsg_flags = (nowait && pid) ? NLM_F_MULTI : 0; - nfmsg->nfgen_family = AF_INET; - nfmsg->version = NFNETLINK_V0; - nfmsg->res_id = 0; - - nest_parms = NFA_NEST(skb, CTA_TUPLE_ORIG); - if (ctnetlink_dump_tuples(skb, tuple(ct, IP_CT_DIR_ORIGINAL)) < 0) - goto nfattr_failure; - NFA_NEST_END(skb, nest_parms); - - nest_parms = NFA_NEST(skb, CTA_TUPLE_REPLY); - if (ctnetlink_dump_tuples(skb, tuple(ct, IP_CT_DIR_REPLY)) < 0) - goto nfattr_failure; - NFA_NEST_END(skb, nest_parms); - - if (ctnetlink_dump_status(skb, ct) < 0 || - ctnetlink_dump_timeout(skb, ct) < 0 || - ctnetlink_dump_counters(skb, ct, IP_CT_DIR_ORIGINAL) < 0 || - ctnetlink_dump_counters(skb, ct, IP_CT_DIR_REPLY) < 0 || - ctnetlink_dump_protoinfo(skb, ct) < 0 || - ctnetlink_dump_helpinfo(skb, ct) < 0 || - ctnetlink_dump_mark(skb, ct) < 0 || - ctnetlink_dump_id(skb, ct) < 0 || - ctnetlink_dump_use(skb, ct) < 0) - goto nfattr_failure; - - nlh->nlmsg_len = skb->tail - b; - return skb->len; - -nlmsg_failure: -nfattr_failure: - skb_trim(skb, b - skb->data); - return -1; -} - -#ifdef CONFIG_IP_NF_CONNTRACK_EVENTS -static int ctnetlink_conntrack_event(struct notifier_block *this, - unsigned long events, void *ptr) -{ - struct nlmsghdr *nlh; - struct nfgenmsg *nfmsg; - struct nfattr *nest_parms; - struct ip_conntrack *ct = (struct ip_conntrack *)ptr; - struct sk_buff *skb; - unsigned int type; - unsigned char *b; - unsigned int flags = 0, group; - - /* ignore our fake conntrack entry */ - if (ct == &ip_conntrack_untracked) - return NOTIFY_DONE; - - if (events & IPCT_DESTROY) { - type = IPCTNL_MSG_CT_DELETE; - group = NFNLGRP_CONNTRACK_DESTROY; - } else if (events & (IPCT_NEW | IPCT_RELATED)) { - type = IPCTNL_MSG_CT_NEW; - flags = NLM_F_CREATE|NLM_F_EXCL; - group = NFNLGRP_CONNTRACK_NEW; - } else if (events & (IPCT_STATUS | IPCT_PROTOINFO)) { - type = IPCTNL_MSG_CT_NEW; - group = 
NFNLGRP_CONNTRACK_UPDATE; - } else - return NOTIFY_DONE; - - if (!nfnetlink_has_listeners(group)) - return NOTIFY_DONE; - - skb = alloc_skb(NLMSG_GOODSIZE, GFP_ATOMIC); - if (!skb) - return NOTIFY_DONE; - - b = skb->tail; - - type |= NFNL_SUBSYS_CTNETLINK << 8; - nlh = NLMSG_PUT(skb, 0, 0, type, sizeof(struct nfgenmsg)); - nfmsg = NLMSG_DATA(nlh); - - nlh->nlmsg_flags = flags; - nfmsg->nfgen_family = AF_INET; - nfmsg->version = NFNETLINK_V0; - nfmsg->res_id = 0; - - nest_parms = NFA_NEST(skb, CTA_TUPLE_ORIG); - if (ctnetlink_dump_tuples(skb, tuple(ct, IP_CT_DIR_ORIGINAL)) < 0) - goto nfattr_failure; - NFA_NEST_END(skb, nest_parms); - - nest_parms = NFA_NEST(skb, CTA_TUPLE_REPLY); - if (ctnetlink_dump_tuples(skb, tuple(ct, IP_CT_DIR_REPLY)) < 0) - goto nfattr_failure; - NFA_NEST_END(skb, nest_parms); - - if (events & IPCT_DESTROY) { - if (ctnetlink_dump_counters(skb, ct, IP_CT_DIR_ORIGINAL) < 0 || - ctnetlink_dump_counters(skb, ct, IP_CT_DIR_REPLY) < 0) - goto nfattr_failure; - } else { - if (ctnetlink_dump_status(skb, ct) < 0) - goto nfattr_failure; - - if (ctnetlink_dump_timeout(skb, ct) < 0) - goto nfattr_failure; - - if (events & IPCT_PROTOINFO - && ctnetlink_dump_protoinfo(skb, ct) < 0) - goto nfattr_failure; - - if ((events & IPCT_HELPER || ct->helper) - && ctnetlink_dump_helpinfo(skb, ct) < 0) - goto nfattr_failure; - -#ifdef CONFIG_IP_NF_CONNTRACK_MARK - if ((events & IPCT_MARK || ct->mark) - && ctnetlink_dump_mark(skb, ct) < 0) - goto nfattr_failure; -#endif - - if (events & IPCT_COUNTER_FILLING && - (ctnetlink_dump_counters(skb, ct, IP_CT_DIR_ORIGINAL) < 0 || - ctnetlink_dump_counters(skb, ct, IP_CT_DIR_REPLY) < 0)) - goto nfattr_failure; - } - - nlh->nlmsg_len = skb->tail - b; - nfnetlink_send(skb, 0, group, 0); - return NOTIFY_DONE; - -nlmsg_failure: -nfattr_failure: - kfree_skb(skb); - return NOTIFY_DONE; -} -#endif /* CONFIG_IP_NF_CONNTRACK_EVENTS */ - -static int ctnetlink_done(struct netlink_callback *cb) -{ - if (cb->args[1]) - ip_conntrack_put((struct ip_conntrack *)cb->args[1]); - return 0; -} - -static int -ctnetlink_dump_table(struct sk_buff *skb, struct netlink_callback *cb) -{ - struct ip_conntrack *ct, *last; - struct ip_conntrack_tuple_hash *h; - struct list_head *i; - - read_lock_bh(&ip_conntrack_lock); - last = (struct ip_conntrack *)cb->args[1]; - for (; cb->args[0] < ip_conntrack_htable_size; cb->args[0]++) { -restart: - list_for_each_prev(i, &ip_conntrack_hash[cb->args[0]]) { - h = (struct ip_conntrack_tuple_hash *) i; - if (DIRECTION(h) != IP_CT_DIR_ORIGINAL) - continue; - ct = tuplehash_to_ctrack(h); - if (cb->args[1]) { - if (ct != last) - continue; - cb->args[1] = 0; - } - if (ctnetlink_fill_info(skb, NETLINK_CB(cb->skb).pid, - cb->nlh->nlmsg_seq, - IPCTNL_MSG_CT_NEW, - 1, ct) < 0) { - nf_conntrack_get(&ct->ct_general); - cb->args[1] = (unsigned long)ct; - goto out; - } -#ifdef CONFIG_NF_CT_ACCT - if (NFNL_MSG_TYPE(cb->nlh->nlmsg_type) == - IPCTNL_MSG_CT_GET_CTRZERO) - memset(&ct->counters, 0, sizeof(ct->counters)); -#endif - } - if (cb->args[1]) { - cb->args[1] = 0; - goto restart; - } - } -out: - read_unlock_bh(&ip_conntrack_lock); - if (last) - ip_conntrack_put(last); - - return skb->len; -} - -static const size_t cta_min_ip[CTA_IP_MAX] = { - [CTA_IP_V4_SRC-1] = sizeof(__be32), - [CTA_IP_V4_DST-1] = sizeof(__be32), -}; - -static inline int -ctnetlink_parse_tuple_ip(struct nfattr *attr, struct ip_conntrack_tuple *tuple) -{ - struct nfattr *tb[CTA_IP_MAX]; - - nfattr_parse_nested(tb, CTA_IP_MAX, attr); - - if (nfattr_bad_size(tb, CTA_IP_MAX, cta_min_ip)) - 
return -EINVAL; - - if (!tb[CTA_IP_V4_SRC-1]) - return -EINVAL; - tuple->src.ip = *(__be32 *)NFA_DATA(tb[CTA_IP_V4_SRC-1]); - - if (!tb[CTA_IP_V4_DST-1]) - return -EINVAL; - tuple->dst.ip = *(__be32 *)NFA_DATA(tb[CTA_IP_V4_DST-1]); - - return 0; -} - -static const size_t cta_min_proto[CTA_PROTO_MAX] = { - [CTA_PROTO_NUM-1] = sizeof(u_int8_t), - [CTA_PROTO_SRC_PORT-1] = sizeof(u_int16_t), - [CTA_PROTO_DST_PORT-1] = sizeof(u_int16_t), - [CTA_PROTO_ICMP_TYPE-1] = sizeof(u_int8_t), - [CTA_PROTO_ICMP_CODE-1] = sizeof(u_int8_t), - [CTA_PROTO_ICMP_ID-1] = sizeof(u_int16_t), -}; - -static inline int -ctnetlink_parse_tuple_proto(struct nfattr *attr, - struct ip_conntrack_tuple *tuple) -{ - struct nfattr *tb[CTA_PROTO_MAX]; - struct ip_conntrack_protocol *proto; - int ret = 0; - - nfattr_parse_nested(tb, CTA_PROTO_MAX, attr); - - if (nfattr_bad_size(tb, CTA_PROTO_MAX, cta_min_proto)) - return -EINVAL; - - if (!tb[CTA_PROTO_NUM-1]) - return -EINVAL; - tuple->dst.protonum = *(u_int8_t *)NFA_DATA(tb[CTA_PROTO_NUM-1]); - - proto = ip_conntrack_proto_find_get(tuple->dst.protonum); - - if (likely(proto->nfattr_to_tuple)) - ret = proto->nfattr_to_tuple(tb, tuple); - - ip_conntrack_proto_put(proto); - - return ret; -} - -static inline int -ctnetlink_parse_tuple(struct nfattr *cda[], struct ip_conntrack_tuple *tuple, - enum ctattr_tuple type) -{ - struct nfattr *tb[CTA_TUPLE_MAX]; - int err; - - memset(tuple, 0, sizeof(*tuple)); - - nfattr_parse_nested(tb, CTA_TUPLE_MAX, cda[type-1]); - - if (!tb[CTA_TUPLE_IP-1]) - return -EINVAL; - - err = ctnetlink_parse_tuple_ip(tb[CTA_TUPLE_IP-1], tuple); - if (err < 0) - return err; - - if (!tb[CTA_TUPLE_PROTO-1]) - return -EINVAL; - - err = ctnetlink_parse_tuple_proto(tb[CTA_TUPLE_PROTO-1], tuple); - if (err < 0) - return err; - - /* orig and expect tuples get DIR_ORIGINAL */ - if (type == CTA_TUPLE_REPLY) - tuple->dst.dir = IP_CT_DIR_REPLY; - else - tuple->dst.dir = IP_CT_DIR_ORIGINAL; - - return 0; -} - -#ifdef CONFIG_IP_NF_NAT_NEEDED -static const size_t cta_min_protonat[CTA_PROTONAT_MAX] = { - [CTA_PROTONAT_PORT_MIN-1] = sizeof(u_int16_t), - [CTA_PROTONAT_PORT_MAX-1] = sizeof(u_int16_t), -}; - -static int ctnetlink_parse_nat_proto(struct nfattr *attr, - const struct ip_conntrack *ct, - struct ip_nat_range *range) -{ - struct nfattr *tb[CTA_PROTONAT_MAX]; - struct ip_nat_protocol *npt; - - nfattr_parse_nested(tb, CTA_PROTONAT_MAX, attr); - - if (nfattr_bad_size(tb, CTA_PROTONAT_MAX, cta_min_protonat)) - return -EINVAL; - - npt = ip_nat_proto_find_get(ct->tuplehash[IP_CT_DIR_ORIGINAL].tuple.dst.protonum); - - if (!npt->nfattr_to_range) { - ip_nat_proto_put(npt); - return 0; - } - - /* nfattr_to_range returns 1 if it parsed, 0 if not, neg. 
on error */ - if (npt->nfattr_to_range(tb, range) > 0) - range->flags |= IP_NAT_RANGE_PROTO_SPECIFIED; - - ip_nat_proto_put(npt); - - return 0; -} - -static const size_t cta_min_nat[CTA_NAT_MAX] = { - [CTA_NAT_MINIP-1] = sizeof(__be32), - [CTA_NAT_MAXIP-1] = sizeof(__be32), -}; - -static inline int -ctnetlink_parse_nat(struct nfattr *nat, - const struct ip_conntrack *ct, struct ip_nat_range *range) -{ - struct nfattr *tb[CTA_NAT_MAX]; - int err; - - memset(range, 0, sizeof(*range)); - - nfattr_parse_nested(tb, CTA_NAT_MAX, nat); - - if (nfattr_bad_size(tb, CTA_NAT_MAX, cta_min_nat)) - return -EINVAL; - - if (tb[CTA_NAT_MINIP-1]) - range->min_ip = *(__be32 *)NFA_DATA(tb[CTA_NAT_MINIP-1]); - - if (!tb[CTA_NAT_MAXIP-1]) - range->max_ip = range->min_ip; - else - range->max_ip = *(__be32 *)NFA_DATA(tb[CTA_NAT_MAXIP-1]); - - if (range->min_ip) - range->flags |= IP_NAT_RANGE_MAP_IPS; - - if (!tb[CTA_NAT_PROTO-1]) - return 0; - - err = ctnetlink_parse_nat_proto(tb[CTA_NAT_PROTO-1], ct, range); - if (err < 0) - return err; - - return 0; -} -#endif - -static inline int -ctnetlink_parse_help(struct nfattr *attr, char **helper_name) -{ - struct nfattr *tb[CTA_HELP_MAX]; - - nfattr_parse_nested(tb, CTA_HELP_MAX, attr); - - if (!tb[CTA_HELP_NAME-1]) - return -EINVAL; - - *helper_name = NFA_DATA(tb[CTA_HELP_NAME-1]); - - return 0; -} - -static const size_t cta_min[CTA_MAX] = { - [CTA_STATUS-1] = sizeof(__be32), - [CTA_TIMEOUT-1] = sizeof(__be32), - [CTA_MARK-1] = sizeof(__be32), - [CTA_USE-1] = sizeof(__be32), - [CTA_ID-1] = sizeof(__be32) -}; - -static int -ctnetlink_del_conntrack(struct sock *ctnl, struct sk_buff *skb, - struct nlmsghdr *nlh, struct nfattr *cda[], int *errp) -{ - struct ip_conntrack_tuple_hash *h; - struct ip_conntrack_tuple tuple; - struct ip_conntrack *ct; - int err = 0; - - if (nfattr_bad_size(cda, CTA_MAX, cta_min)) - return -EINVAL; - - if (cda[CTA_TUPLE_ORIG-1]) - err = ctnetlink_parse_tuple(cda, &tuple, CTA_TUPLE_ORIG); - else if (cda[CTA_TUPLE_REPLY-1]) - err = ctnetlink_parse_tuple(cda, &tuple, CTA_TUPLE_REPLY); - else { - /* Flush the whole table */ - ip_conntrack_flush(); - return 0; - } - - if (err < 0) - return err; - - h = ip_conntrack_find_get(&tuple, NULL); - if (!h) - return -ENOENT; - - ct = tuplehash_to_ctrack(h); - - if (cda[CTA_ID-1]) { - u_int32_t id = ntohl(*(__be32 *)NFA_DATA(cda[CTA_ID-1])); - if (ct->id != id) { - ip_conntrack_put(ct); - return -ENOENT; - } - } - if (del_timer(&ct->timeout)) - ct->timeout.function((unsigned long)ct); - - ip_conntrack_put(ct); - - return 0; -} - -static int -ctnetlink_get_conntrack(struct sock *ctnl, struct sk_buff *skb, - struct nlmsghdr *nlh, struct nfattr *cda[], int *errp) -{ - struct ip_conntrack_tuple_hash *h; - struct ip_conntrack_tuple tuple; - struct ip_conntrack *ct; - struct sk_buff *skb2 = NULL; - int err = 0; - - if (nlh->nlmsg_flags & NLM_F_DUMP) { - struct nfgenmsg *msg = NLMSG_DATA(nlh); - u32 rlen; - - if (msg->nfgen_family != AF_INET) - return -EAFNOSUPPORT; - -#ifndef CONFIG_IP_NF_CT_ACCT - if (NFNL_MSG_TYPE(nlh->nlmsg_type) == IPCTNL_MSG_CT_GET_CTRZERO) - return -ENOTSUPP; -#endif - if ((*errp = netlink_dump_start(ctnl, skb, nlh, - ctnetlink_dump_table, - ctnetlink_done)) != 0) - return -EINVAL; - - rlen = NLMSG_ALIGN(nlh->nlmsg_len); - if (rlen > skb->len) - rlen = skb->len; - skb_pull(skb, rlen); - return 0; - } - - if (nfattr_bad_size(cda, CTA_MAX, cta_min)) - return -EINVAL; - - if (cda[CTA_TUPLE_ORIG-1]) - err = ctnetlink_parse_tuple(cda, &tuple, CTA_TUPLE_ORIG); - else if (cda[CTA_TUPLE_REPLY-1]) - err 
= ctnetlink_parse_tuple(cda, &tuple, CTA_TUPLE_REPLY); - else - return -EINVAL; - - if (err < 0) - return err; - - h = ip_conntrack_find_get(&tuple, NULL); - if (!h) - return -ENOENT; - - ct = tuplehash_to_ctrack(h); - - err = -ENOMEM; - skb2 = alloc_skb(NLMSG_GOODSIZE, GFP_KERNEL); - if (!skb2) { - ip_conntrack_put(ct); - return -ENOMEM; - } - - err = ctnetlink_fill_info(skb2, NETLINK_CB(skb).pid, nlh->nlmsg_seq, - IPCTNL_MSG_CT_NEW, 1, ct); - ip_conntrack_put(ct); - if (err <= 0) - goto free; - - err = netlink_unicast(ctnl, skb2, NETLINK_CB(skb).pid, MSG_DONTWAIT); - if (err < 0) - goto out; - - return 0; - -free: - kfree_skb(skb2); -out: - return err; -} - -static inline int -ctnetlink_change_status(struct ip_conntrack *ct, struct nfattr *cda[]) -{ - unsigned long d; - unsigned status = ntohl(*(__be32 *)NFA_DATA(cda[CTA_STATUS-1])); - d = ct->status ^ status; - - if (d & (IPS_EXPECTED|IPS_CONFIRMED|IPS_DYING)) - /* unchangeable */ - return -EINVAL; - - if (d & IPS_SEEN_REPLY && !(status & IPS_SEEN_REPLY)) - /* SEEN_REPLY bit can only be set */ - return -EINVAL; - - - if (d & IPS_ASSURED && !(status & IPS_ASSURED)) - /* ASSURED bit can only be set */ - return -EINVAL; - - if (cda[CTA_NAT_SRC-1] || cda[CTA_NAT_DST-1]) { -#ifndef CONFIG_IP_NF_NAT_NEEDED - return -EINVAL; -#else - struct ip_nat_range range; - - if (cda[CTA_NAT_DST-1]) { - if (ctnetlink_parse_nat(cda[CTA_NAT_DST-1], ct, - &range) < 0) - return -EINVAL; - if (ip_nat_initialized(ct, - HOOK2MANIP(NF_IP_PRE_ROUTING))) - return -EEXIST; - ip_nat_setup_info(ct, &range, NF_IP_PRE_ROUTING); - } - if (cda[CTA_NAT_SRC-1]) { - if (ctnetlink_parse_nat(cda[CTA_NAT_SRC-1], ct, - &range) < 0) - return -EINVAL; - if (ip_nat_initialized(ct, - HOOK2MANIP(NF_IP_POST_ROUTING))) - return -EEXIST; - ip_nat_setup_info(ct, &range, NF_IP_POST_ROUTING); - } -#endif - } - - /* Be careful here, modifying NAT bits can screw up things, - * so don't let users modify them directly if they don't pass - * ip_nat_range. */ - ct->status |= status & ~(IPS_NAT_DONE_MASK | IPS_NAT_MASK); - return 0; -} - - -static inline int -ctnetlink_change_helper(struct ip_conntrack *ct, struct nfattr *cda[]) -{ - struct ip_conntrack_helper *helper; - char *helpname; - int err; - - /* don't change helper of sibling connections */ - if (ct->master) - return -EINVAL; - - err = ctnetlink_parse_help(cda[CTA_HELP-1], &helpname); - if (err < 0) - return err; - - helper = __ip_conntrack_helper_find_byname(helpname); - if (!helper) { - if (!strcmp(helpname, "")) - helper = NULL; - else - return -EINVAL; - } - - if (ct->helper) { - if (!helper) { - /* we had a helper before ... 
*/ - ip_ct_remove_expectations(ct); - ct->helper = NULL; - } else { - /* need to zero data of old helper */ - memset(&ct->help, 0, sizeof(ct->help)); - } - } - - ct->helper = helper; - - return 0; -} - -static inline int -ctnetlink_change_timeout(struct ip_conntrack *ct, struct nfattr *cda[]) -{ - u_int32_t timeout = ntohl(*(__be32 *)NFA_DATA(cda[CTA_TIMEOUT-1])); - - if (!del_timer(&ct->timeout)) - return -ETIME; - - ct->timeout.expires = jiffies + timeout * HZ; - add_timer(&ct->timeout); - - return 0; -} - -static inline int -ctnetlink_change_protoinfo(struct ip_conntrack *ct, struct nfattr *cda[]) -{ - struct nfattr *tb[CTA_PROTOINFO_MAX], *attr = cda[CTA_PROTOINFO-1]; - struct ip_conntrack_protocol *proto; - u_int16_t npt = ct->tuplehash[IP_CT_DIR_ORIGINAL].tuple.dst.protonum; - int err = 0; - - nfattr_parse_nested(tb, CTA_PROTOINFO_MAX, attr); - - proto = ip_conntrack_proto_find_get(npt); - - if (proto->from_nfattr) - err = proto->from_nfattr(tb, ct); - ip_conntrack_proto_put(proto); - - return err; -} - -static int -ctnetlink_change_conntrack(struct ip_conntrack *ct, struct nfattr *cda[]) -{ - int err; - - if (cda[CTA_HELP-1]) { - err = ctnetlink_change_helper(ct, cda); - if (err < 0) - return err; - } - - if (cda[CTA_TIMEOUT-1]) { - err = ctnetlink_change_timeout(ct, cda); - if (err < 0) - return err; - } - - if (cda[CTA_STATUS-1]) { - err = ctnetlink_change_status(ct, cda); - if (err < 0) - return err; - } - - if (cda[CTA_PROTOINFO-1]) { - err = ctnetlink_change_protoinfo(ct, cda); - if (err < 0) - return err; - } - -#if defined(CONFIG_IP_NF_CONNTRACK_MARK) - if (cda[CTA_MARK-1]) - ct->mark = ntohl(*(__be32 *)NFA_DATA(cda[CTA_MARK-1])); -#endif - - return 0; -} - -static int -ctnetlink_create_conntrack(struct nfattr *cda[], - struct ip_conntrack_tuple *otuple, - struct ip_conntrack_tuple *rtuple) -{ - struct ip_conntrack *ct; - int err = -EINVAL; - - ct = ip_conntrack_alloc(otuple, rtuple); - if (ct == NULL || IS_ERR(ct)) - return -ENOMEM; - - if (!cda[CTA_TIMEOUT-1]) - goto err; - ct->timeout.expires = ntohl(*(__be32 *)NFA_DATA(cda[CTA_TIMEOUT-1])); - - ct->timeout.expires = jiffies + ct->timeout.expires * HZ; - ct->status |= IPS_CONFIRMED; - - if (cda[CTA_STATUS-1]) { - err = ctnetlink_change_status(ct, cda); - if (err < 0) - goto err; - } - - if (cda[CTA_PROTOINFO-1]) { - err = ctnetlink_change_protoinfo(ct, cda); - if (err < 0) - goto err; - } - -#if defined(CONFIG_IP_NF_CONNTRACK_MARK) - if (cda[CTA_MARK-1]) - ct->mark = ntohl(*(__be32 *)NFA_DATA(cda[CTA_MARK-1])); -#endif - - ct->helper = ip_conntrack_helper_find_get(rtuple); - - add_timer(&ct->timeout); - ip_conntrack_hash_insert(ct); - - if (ct->helper) - ip_conntrack_helper_put(ct->helper); - - return 0; - -err: - ip_conntrack_free(ct); - return err; -} - -static int -ctnetlink_new_conntrack(struct sock *ctnl, struct sk_buff *skb, - struct nlmsghdr *nlh, struct nfattr *cda[], int *errp) -{ - struct ip_conntrack_tuple otuple, rtuple; - struct ip_conntrack_tuple_hash *h = NULL; - int err = 0; - - if (nfattr_bad_size(cda, CTA_MAX, cta_min)) - return -EINVAL; - - if (cda[CTA_TUPLE_ORIG-1]) { - err = ctnetlink_parse_tuple(cda, &otuple, CTA_TUPLE_ORIG); - if (err < 0) - return err; - } - - if (cda[CTA_TUPLE_REPLY-1]) { - err = ctnetlink_parse_tuple(cda, &rtuple, CTA_TUPLE_REPLY); - if (err < 0) - return err; - } - - write_lock_bh(&ip_conntrack_lock); - if (cda[CTA_TUPLE_ORIG-1]) - h = __ip_conntrack_find(&otuple, NULL); - else if (cda[CTA_TUPLE_REPLY-1]) - h = __ip_conntrack_find(&rtuple, NULL); - - if (h == NULL) { - 
write_unlock_bh(&ip_conntrack_lock); - err = -ENOENT; - if (nlh->nlmsg_flags & NLM_F_CREATE) - err = ctnetlink_create_conntrack(cda, &otuple, &rtuple); - return err; - } - /* implicit 'else' */ - - /* we only allow nat config for new conntracks */ - if (cda[CTA_NAT_SRC-1] || cda[CTA_NAT_DST-1]) { - err = -EINVAL; - goto out_unlock; - } - - /* We manipulate the conntrack inside the global conntrack table lock, - * so there's no need to increase the refcount */ - err = -EEXIST; - if (!(nlh->nlmsg_flags & NLM_F_EXCL)) - err = ctnetlink_change_conntrack(tuplehash_to_ctrack(h), cda); - -out_unlock: - write_unlock_bh(&ip_conntrack_lock); - return err; -} - -/*********************************************************************** - * EXPECT - ***********************************************************************/ - -static inline int -ctnetlink_exp_dump_tuple(struct sk_buff *skb, - const struct ip_conntrack_tuple *tuple, - enum ctattr_expect type) -{ - struct nfattr *nest_parms = NFA_NEST(skb, type); - - if (ctnetlink_dump_tuples(skb, tuple) < 0) - goto nfattr_failure; - - NFA_NEST_END(skb, nest_parms); - - return 0; - -nfattr_failure: - return -1; -} - -static inline int -ctnetlink_exp_dump_mask(struct sk_buff *skb, - const struct ip_conntrack_tuple *tuple, - const struct ip_conntrack_tuple *mask) -{ - int ret; - struct ip_conntrack_protocol *proto; - struct nfattr *nest_parms = NFA_NEST(skb, CTA_EXPECT_MASK); - - ret = ctnetlink_dump_tuples_ip(skb, mask); - if (unlikely(ret < 0)) - goto nfattr_failure; - - proto = ip_conntrack_proto_find_get(tuple->dst.protonum); - ret = ctnetlink_dump_tuples_proto(skb, mask, proto); - ip_conntrack_proto_put(proto); - if (unlikely(ret < 0)) - goto nfattr_failure; - - NFA_NEST_END(skb, nest_parms); - - return 0; - -nfattr_failure: - return -1; -} - -static inline int -ctnetlink_exp_dump_expect(struct sk_buff *skb, - const struct ip_conntrack_expect *exp) -{ - struct ip_conntrack *master = exp->master; - __be32 timeout = htonl((exp->timeout.expires - jiffies) / HZ); - __be32 id = htonl(exp->id); - - if (ctnetlink_exp_dump_tuple(skb, &exp->tuple, CTA_EXPECT_TUPLE) < 0) - goto nfattr_failure; - if (ctnetlink_exp_dump_mask(skb, &exp->tuple, &exp->mask) < 0) - goto nfattr_failure; - if (ctnetlink_exp_dump_tuple(skb, - &master->tuplehash[IP_CT_DIR_ORIGINAL].tuple, - CTA_EXPECT_MASTER) < 0) - goto nfattr_failure; - - NFA_PUT(skb, CTA_EXPECT_TIMEOUT, sizeof(__be32), &timeout); - NFA_PUT(skb, CTA_EXPECT_ID, sizeof(__be32), &id); - - return 0; - -nfattr_failure: - return -1; -} - -static int -ctnetlink_exp_fill_info(struct sk_buff *skb, u32 pid, u32 seq, - int event, - int nowait, - const struct ip_conntrack_expect *exp) -{ - struct nlmsghdr *nlh; - struct nfgenmsg *nfmsg; - unsigned char *b; - - b = skb->tail; - - event |= NFNL_SUBSYS_CTNETLINK_EXP << 8; - nlh = NLMSG_PUT(skb, pid, seq, event, sizeof(struct nfgenmsg)); - nfmsg = NLMSG_DATA(nlh); - - nlh->nlmsg_flags = (nowait && pid) ? 
NLM_F_MULTI : 0; - nfmsg->nfgen_family = AF_INET; - nfmsg->version = NFNETLINK_V0; - nfmsg->res_id = 0; - - if (ctnetlink_exp_dump_expect(skb, exp) < 0) - goto nfattr_failure; - - nlh->nlmsg_len = skb->tail - b; - return skb->len; - -nlmsg_failure: -nfattr_failure: - skb_trim(skb, b - skb->data); - return -1; -} - -#ifdef CONFIG_IP_NF_CONNTRACK_EVENTS -static int ctnetlink_expect_event(struct notifier_block *this, - unsigned long events, void *ptr) -{ - struct nlmsghdr *nlh; - struct nfgenmsg *nfmsg; - struct ip_conntrack_expect *exp = (struct ip_conntrack_expect *)ptr; - struct sk_buff *skb; - unsigned int type; - unsigned char *b; - int flags = 0; - - if (events & IPEXP_NEW) { - type = IPCTNL_MSG_EXP_NEW; - flags = NLM_F_CREATE|NLM_F_EXCL; - } else - return NOTIFY_DONE; - - if (!nfnetlink_has_listeners(NFNLGRP_CONNTRACK_EXP_NEW)) - return NOTIFY_DONE; - - skb = alloc_skb(NLMSG_GOODSIZE, GFP_ATOMIC); - if (!skb) - return NOTIFY_DONE; - - b = skb->tail; - - type |= NFNL_SUBSYS_CTNETLINK_EXP << 8; - nlh = NLMSG_PUT(skb, 0, 0, type, sizeof(struct nfgenmsg)); - nfmsg = NLMSG_DATA(nlh); - - nlh->nlmsg_flags = flags; - nfmsg->nfgen_family = AF_INET; - nfmsg->version = NFNETLINK_V0; - nfmsg->res_id = 0; - - if (ctnetlink_exp_dump_expect(skb, exp) < 0) - goto nfattr_failure; - - nlh->nlmsg_len = skb->tail - b; - nfnetlink_send(skb, 0, NFNLGRP_CONNTRACK_EXP_NEW, 0); - return NOTIFY_DONE; - -nlmsg_failure: -nfattr_failure: - kfree_skb(skb); - return NOTIFY_DONE; -} -#endif - -static int -ctnetlink_exp_dump_table(struct sk_buff *skb, struct netlink_callback *cb) -{ - struct ip_conntrack_expect *exp = NULL; - struct list_head *i; - u_int32_t *id = (u_int32_t *) &cb->args[0]; - - read_lock_bh(&ip_conntrack_lock); - list_for_each_prev(i, &ip_conntrack_expect_list) { - exp = (struct ip_conntrack_expect *) i; - if (exp->id <= *id) - continue; - if (ctnetlink_exp_fill_info(skb, NETLINK_CB(cb->skb).pid, - cb->nlh->nlmsg_seq, - IPCTNL_MSG_EXP_NEW, - 1, exp) < 0) - goto out; - *id = exp->id; - } -out: - read_unlock_bh(&ip_conntrack_lock); - - return skb->len; -} - -static const size_t cta_min_exp[CTA_EXPECT_MAX] = { - [CTA_EXPECT_TIMEOUT-1] = sizeof(__be32), - [CTA_EXPECT_ID-1] = sizeof(__be32) -}; - -static int -ctnetlink_get_expect(struct sock *ctnl, struct sk_buff *skb, - struct nlmsghdr *nlh, struct nfattr *cda[], int *errp) -{ - struct ip_conntrack_tuple tuple; - struct ip_conntrack_expect *exp; - struct sk_buff *skb2; - int err = 0; - - if (nfattr_bad_size(cda, CTA_EXPECT_MAX, cta_min_exp)) - return -EINVAL; - - if (nlh->nlmsg_flags & NLM_F_DUMP) { - struct nfgenmsg *msg = NLMSG_DATA(nlh); - u32 rlen; - - if (msg->nfgen_family != AF_INET) - return -EAFNOSUPPORT; - - if ((*errp = netlink_dump_start(ctnl, skb, nlh, - ctnetlink_exp_dump_table, - ctnetlink_done)) != 0) - return -EINVAL; - rlen = NLMSG_ALIGN(nlh->nlmsg_len); - if (rlen > skb->len) - rlen = skb->len; - skb_pull(skb, rlen); - return 0; - } - - if (cda[CTA_EXPECT_MASTER-1]) - err = ctnetlink_parse_tuple(cda, &tuple, CTA_EXPECT_MASTER); - else - return -EINVAL; - - if (err < 0) - return err; - - exp = ip_conntrack_expect_find_get(&tuple); - if (!exp) - return -ENOENT; - - if (cda[CTA_EXPECT_ID-1]) { - __be32 id = *(__be32 *)NFA_DATA(cda[CTA_EXPECT_ID-1]); - if (exp->id != ntohl(id)) { - ip_conntrack_expect_put(exp); - return -ENOENT; - } - } - - err = -ENOMEM; - skb2 = alloc_skb(NLMSG_GOODSIZE, GFP_KERNEL); - if (!skb2) - goto out; - - err = ctnetlink_exp_fill_info(skb2, NETLINK_CB(skb).pid, - nlh->nlmsg_seq, IPCTNL_MSG_EXP_NEW, - 1, exp); - if 
(err <= 0) - goto free; - - ip_conntrack_expect_put(exp); - - return netlink_unicast(ctnl, skb2, NETLINK_CB(skb).pid, MSG_DONTWAIT); - -free: - kfree_skb(skb2); -out: - ip_conntrack_expect_put(exp); - return err; -} - -static int -ctnetlink_del_expect(struct sock *ctnl, struct sk_buff *skb, - struct nlmsghdr *nlh, struct nfattr *cda[], int *errp) -{ - struct ip_conntrack_expect *exp, *tmp; - struct ip_conntrack_tuple tuple; - struct ip_conntrack_helper *h; - int err; - - if (nfattr_bad_size(cda, CTA_EXPECT_MAX, cta_min_exp)) - return -EINVAL; - - if (cda[CTA_EXPECT_TUPLE-1]) { - /* delete a single expect by tuple */ - err = ctnetlink_parse_tuple(cda, &tuple, CTA_EXPECT_TUPLE); - if (err < 0) - return err; - - /* bump usage count to 2 */ - exp = ip_conntrack_expect_find_get(&tuple); - if (!exp) - return -ENOENT; - - if (cda[CTA_EXPECT_ID-1]) { - __be32 id = - *(__be32 *)NFA_DATA(cda[CTA_EXPECT_ID-1]); - if (exp->id != ntohl(id)) { - ip_conntrack_expect_put(exp); - return -ENOENT; - } - } - - /* after list removal, usage count == 1 */ - ip_conntrack_unexpect_related(exp); - /* have to put what we 'get' above. - * after this line usage count == 0 */ - ip_conntrack_expect_put(exp); - } else if (cda[CTA_EXPECT_HELP_NAME-1]) { - char *name = NFA_DATA(cda[CTA_EXPECT_HELP_NAME-1]); - - /* delete all expectations for this helper */ - write_lock_bh(&ip_conntrack_lock); - h = __ip_conntrack_helper_find_byname(name); - if (!h) { - write_unlock_bh(&ip_conntrack_lock); - return -EINVAL; - } - list_for_each_entry_safe(exp, tmp, &ip_conntrack_expect_list, - list) { - if (exp->master->helper == h - && del_timer(&exp->timeout)) { - ip_ct_unlink_expect(exp); - ip_conntrack_expect_put(exp); - } - } - write_unlock_bh(&ip_conntrack_lock); - } else { - /* This basically means we have to flush everything*/ - write_lock_bh(&ip_conntrack_lock); - list_for_each_entry_safe(exp, tmp, &ip_conntrack_expect_list, - list) { - if (del_timer(&exp->timeout)) { - ip_ct_unlink_expect(exp); - ip_conntrack_expect_put(exp); - } - } - write_unlock_bh(&ip_conntrack_lock); - } - - return 0; -} -static int -ctnetlink_change_expect(struct ip_conntrack_expect *x, struct nfattr *cda[]) -{ - return -EOPNOTSUPP; -} - -static int -ctnetlink_create_expect(struct nfattr *cda[]) -{ - struct ip_conntrack_tuple tuple, mask, master_tuple; - struct ip_conntrack_tuple_hash *h = NULL; - struct ip_conntrack_expect *exp; - struct ip_conntrack *ct; - int err = 0; - - /* caller guarantees that those three CTA_EXPECT_* exist */ - err = ctnetlink_parse_tuple(cda, &tuple, CTA_EXPECT_TUPLE); - if (err < 0) - return err; - err = ctnetlink_parse_tuple(cda, &mask, CTA_EXPECT_MASK); - if (err < 0) - return err; - err = ctnetlink_parse_tuple(cda, &master_tuple, CTA_EXPECT_MASTER); - if (err < 0) - return err; - - /* Look for master conntrack of this expectation */ - h = ip_conntrack_find_get(&master_tuple, NULL); - if (!h) - return -ENOENT; - ct = tuplehash_to_ctrack(h); - - if (!ct->helper) { - /* such conntrack hasn't got any helper, abort */ - err = -EINVAL; - goto out; - } - - exp = ip_conntrack_expect_alloc(ct); - if (!exp) { - err = -ENOMEM; - goto out; - } - - exp->expectfn = NULL; - exp->flags = 0; - exp->master = ct; - memcpy(&exp->tuple, &tuple, sizeof(struct ip_conntrack_tuple)); - memcpy(&exp->mask, &mask, sizeof(struct ip_conntrack_tuple)); - - err = ip_conntrack_expect_related(exp); - ip_conntrack_expect_put(exp); - -out: - ip_conntrack_put(tuplehash_to_ctrack(h)); - return err; -} - -static int -ctnetlink_new_expect(struct sock *ctnl, struct 
sk_buff *skb, - struct nlmsghdr *nlh, struct nfattr *cda[], int *errp) -{ - struct ip_conntrack_tuple tuple; - struct ip_conntrack_expect *exp; - int err = 0; - - if (nfattr_bad_size(cda, CTA_EXPECT_MAX, cta_min_exp)) - return -EINVAL; - - if (!cda[CTA_EXPECT_TUPLE-1] - || !cda[CTA_EXPECT_MASK-1] - || !cda[CTA_EXPECT_MASTER-1]) - return -EINVAL; - - err = ctnetlink_parse_tuple(cda, &tuple, CTA_EXPECT_TUPLE); - if (err < 0) - return err; - - write_lock_bh(&ip_conntrack_lock); - exp = __ip_conntrack_expect_find(&tuple); - - if (!exp) { - write_unlock_bh(&ip_conntrack_lock); - err = -ENOENT; - if (nlh->nlmsg_flags & NLM_F_CREATE) - err = ctnetlink_create_expect(cda); - return err; - } - - err = -EEXIST; - if (!(nlh->nlmsg_flags & NLM_F_EXCL)) - err = ctnetlink_change_expect(exp, cda); - write_unlock_bh(&ip_conntrack_lock); - - return err; -} - -#ifdef CONFIG_IP_NF_CONNTRACK_EVENTS -static struct notifier_block ctnl_notifier = { - .notifier_call = ctnetlink_conntrack_event, -}; - -static struct notifier_block ctnl_notifier_exp = { - .notifier_call = ctnetlink_expect_event, -}; -#endif - -static struct nfnl_callback ctnl_cb[IPCTNL_MSG_MAX] = { - [IPCTNL_MSG_CT_NEW] = { .call = ctnetlink_new_conntrack, - .attr_count = CTA_MAX, }, - [IPCTNL_MSG_CT_GET] = { .call = ctnetlink_get_conntrack, - .attr_count = CTA_MAX, }, - [IPCTNL_MSG_CT_DELETE] = { .call = ctnetlink_del_conntrack, - .attr_count = CTA_MAX, }, - [IPCTNL_MSG_CT_GET_CTRZERO] = { .call = ctnetlink_get_conntrack, - .attr_count = CTA_MAX, }, -}; - -static struct nfnl_callback ctnl_exp_cb[IPCTNL_MSG_EXP_MAX] = { - [IPCTNL_MSG_EXP_GET] = { .call = ctnetlink_get_expect, - .attr_count = CTA_EXPECT_MAX, }, - [IPCTNL_MSG_EXP_NEW] = { .call = ctnetlink_new_expect, - .attr_count = CTA_EXPECT_MAX, }, - [IPCTNL_MSG_EXP_DELETE] = { .call = ctnetlink_del_expect, - .attr_count = CTA_EXPECT_MAX, }, -}; - -static struct nfnetlink_subsystem ctnl_subsys = { - .name = "conntrack", - .subsys_id = NFNL_SUBSYS_CTNETLINK, - .cb_count = IPCTNL_MSG_MAX, - .cb = ctnl_cb, -}; - -static struct nfnetlink_subsystem ctnl_exp_subsys = { - .name = "conntrack_expect", - .subsys_id = NFNL_SUBSYS_CTNETLINK_EXP, - .cb_count = IPCTNL_MSG_EXP_MAX, - .cb = ctnl_exp_cb, -}; - -MODULE_ALIAS_NFNL_SUBSYS(NFNL_SUBSYS_CTNETLINK); -MODULE_ALIAS_NFNL_SUBSYS(NFNL_SUBSYS_CTNETLINK_EXP); - -static int __init ctnetlink_init(void) -{ - int ret; - - printk("ctnetlink v%s: registering with nfnetlink.\n", version); - ret = nfnetlink_subsys_register(&ctnl_subsys); - if (ret < 0) { - printk("ctnetlink_init: cannot register with nfnetlink.\n"); - goto err_out; - } - - ret = nfnetlink_subsys_register(&ctnl_exp_subsys); - if (ret < 0) { - printk("ctnetlink_init: cannot register exp with nfnetlink.\n"); - goto err_unreg_subsys; - } - -#ifdef CONFIG_IP_NF_CONNTRACK_EVENTS - ret = ip_conntrack_register_notifier(&ctnl_notifier); - if (ret < 0) { - printk("ctnetlink_init: cannot register notifier.\n"); - goto err_unreg_exp_subsys; - } - - ret = ip_conntrack_expect_register_notifier(&ctnl_notifier_exp); - if (ret < 0) { - printk("ctnetlink_init: cannot expect register notifier.\n"); - goto err_unreg_notifier; - } -#endif - - return 0; - -#ifdef CONFIG_IP_NF_CONNTRACK_EVENTS -err_unreg_notifier: - ip_conntrack_unregister_notifier(&ctnl_notifier); -err_unreg_exp_subsys: - nfnetlink_subsys_unregister(&ctnl_exp_subsys); -#endif -err_unreg_subsys: - nfnetlink_subsys_unregister(&ctnl_subsys); -err_out: - return ret; -} - -static void __exit ctnetlink_exit(void) -{ - printk("ctnetlink: unregistering from 
nfnetlink.\n"); - -#ifdef CONFIG_IP_NF_CONNTRACK_EVENTS - ip_conntrack_expect_unregister_notifier(&ctnl_notifier_exp); - ip_conntrack_unregister_notifier(&ctnl_notifier); -#endif - - nfnetlink_subsys_unregister(&ctnl_exp_subsys); - nfnetlink_subsys_unregister(&ctnl_subsys); - return; -} - -module_init(ctnetlink_init); -module_exit(ctnetlink_exit); diff -puN net/ipv4/netfilter/ip_conntrack_proto_generic.c~git-net /dev/null --- a/net/ipv4/netfilter/ip_conntrack_proto_generic.c +++ /dev/null @@ -1,74 +0,0 @@ -/* (C) 1999-2001 Paul `Rusty' Russell - * (C) 2002-2004 Netfilter Core Team - * - * This program is free software; you can redistribute it and/or modify - * it under the terms of the GNU General Public License version 2 as - * published by the Free Software Foundation. - */ - -#include -#include -#include -#include - -unsigned int ip_ct_generic_timeout __read_mostly = 600*HZ; - -static int generic_pkt_to_tuple(const struct sk_buff *skb, - unsigned int dataoff, - struct ip_conntrack_tuple *tuple) -{ - tuple->src.u.all = 0; - tuple->dst.u.all = 0; - - return 1; -} - -static int generic_invert_tuple(struct ip_conntrack_tuple *tuple, - const struct ip_conntrack_tuple *orig) -{ - tuple->src.u.all = 0; - tuple->dst.u.all = 0; - - return 1; -} - -/* Print out the per-protocol part of the tuple. */ -static int generic_print_tuple(struct seq_file *s, - const struct ip_conntrack_tuple *tuple) -{ - return 0; -} - -/* Print out the private part of the conntrack. */ -static int generic_print_conntrack(struct seq_file *s, - const struct ip_conntrack *state) -{ - return 0; -} - -/* Returns verdict for packet, or -1 for invalid. */ -static int packet(struct ip_conntrack *conntrack, - const struct sk_buff *skb, - enum ip_conntrack_info ctinfo) -{ - ip_ct_refresh_acct(conntrack, ctinfo, skb, ip_ct_generic_timeout); - return NF_ACCEPT; -} - -/* Called when a new connection for this protocol found. */ -static int new(struct ip_conntrack *conntrack, const struct sk_buff *skb) -{ - return 1; -} - -struct ip_conntrack_protocol ip_conntrack_generic_protocol = -{ - .proto = 0, - .name = "unknown", - .pkt_to_tuple = generic_pkt_to_tuple, - .invert_tuple = generic_invert_tuple, - .print_tuple = generic_print_tuple, - .print_conntrack = generic_print_conntrack, - .packet = packet, - .new = new, -}; diff -puN net/ipv4/netfilter/ip_conntrack_proto_gre.c~git-net /dev/null --- a/net/ipv4/netfilter/ip_conntrack_proto_gre.c +++ /dev/null @@ -1,328 +0,0 @@ -/* - * ip_conntrack_proto_gre.c - Version 3.0 - * - * Connection tracking protocol helper module for GRE. - * - * GRE is a generic encapsulation protocol, which is generally not very - * suited for NAT, as it has no protocol-specific part as port numbers. - * - * It has an optional key field, which may help us distinguishing two - * connections between the same two hosts. - * - * GRE is defined in RFC 1701 and RFC 1702, as well as RFC 2784 - * - * PPTP is built on top of a modified version of GRE, and has a mandatory - * field called "CallID", which serves us for the same purpose as the key - * field in plain GRE. 
- * - * Documentation about PPTP can be found in RFC 2637 - * - * (C) 2000-2005 by Harald Welte - * - * Development of this code funded by Astaro AG (http://www.astaro.com/) - * - */ - -#include -#include -#include -#include -#include -#include -#include -#include -#include - -static DEFINE_RWLOCK(ip_ct_gre_lock); - -#include -#include -#include - -#include -#include - -MODULE_LICENSE("GPL"); -MODULE_AUTHOR("Harald Welte "); -MODULE_DESCRIPTION("netfilter connection tracking protocol helper for GRE"); - -/* shamelessly stolen from ip_conntrack_proto_udp.c */ -#define GRE_TIMEOUT (30*HZ) -#define GRE_STREAM_TIMEOUT (180*HZ) - -#if 0 -#define DEBUGP(format, args...) printk(KERN_DEBUG "%s:%s: " format, __FILE__, __FUNCTION__, ## args) -#define DUMP_TUPLE_GRE(x) printk("%u.%u.%u.%u:0x%x -> %u.%u.%u.%u:0x%x\n", \ - NIPQUAD((x)->src.ip), ntohs((x)->src.u.gre.key), \ - NIPQUAD((x)->dst.ip), ntohs((x)->dst.u.gre.key)) -#else -#define DEBUGP(x, args...) -#define DUMP_TUPLE_GRE(x) -#endif - -/* GRE KEYMAP HANDLING FUNCTIONS */ -static LIST_HEAD(gre_keymap_list); - -static inline int gre_key_cmpfn(const struct ip_ct_gre_keymap *km, - const struct ip_conntrack_tuple *t) -{ - return ((km->tuple.src.ip == t->src.ip) && - (km->tuple.dst.ip == t->dst.ip) && - (km->tuple.dst.protonum == t->dst.protonum) && - (km->tuple.dst.u.all == t->dst.u.all)); -} - -/* look up the source key for a given tuple */ -static __be16 gre_keymap_lookup(struct ip_conntrack_tuple *t) -{ - struct ip_ct_gre_keymap *km; - __be16 key = 0; - - read_lock_bh(&ip_ct_gre_lock); - list_for_each_entry(km, &gre_keymap_list, list) { - if (gre_key_cmpfn(km, t)) { - key = km->tuple.src.u.gre.key; - break; - } - } - read_unlock_bh(&ip_ct_gre_lock); - - DEBUGP("lookup src key 0x%x up key for ", key); - DUMP_TUPLE_GRE(t); - - return key; -} - -/* add a single keymap entry, associate with specified master ct */ -int -ip_ct_gre_keymap_add(struct ip_conntrack *ct, - struct ip_conntrack_tuple *t, int reply) -{ - struct ip_ct_gre_keymap **exist_km, *km; - - if (!ct->helper || strcmp(ct->helper->name, "pptp")) { - DEBUGP("refusing to add GRE keymap to non-pptp session\n"); - return -1; - } - - if (!reply) - exist_km = &ct->help.ct_pptp_info.keymap_orig; - else - exist_km = &ct->help.ct_pptp_info.keymap_reply; - - if (*exist_km) { - /* check whether it's a retransmission */ - list_for_each_entry(km, &gre_keymap_list, list) { - if (gre_key_cmpfn(km, t) && km == *exist_km) - return 0; - } - DEBUGP("trying to override keymap_%s for ct %p\n", - reply? 
"reply":"orig", ct); - return -EEXIST; - } - - km = kmalloc(sizeof(*km), GFP_ATOMIC); - if (!km) - return -ENOMEM; - - memcpy(&km->tuple, t, sizeof(*t)); - *exist_km = km; - - DEBUGP("adding new entry %p: ", km); - DUMP_TUPLE_GRE(&km->tuple); - - write_lock_bh(&ip_ct_gre_lock); - list_add_tail(&km->list, &gre_keymap_list); - write_unlock_bh(&ip_ct_gre_lock); - - return 0; -} - -/* destroy the keymap entries associated with specified master ct */ -void ip_ct_gre_keymap_destroy(struct ip_conntrack *ct) -{ - DEBUGP("entering for ct %p\n", ct); - - if (!ct->helper || strcmp(ct->helper->name, "pptp")) { - DEBUGP("refusing to destroy GRE keymap to non-pptp session\n"); - return; - } - - write_lock_bh(&ip_ct_gre_lock); - if (ct->help.ct_pptp_info.keymap_orig) { - DEBUGP("removing %p from list\n", - ct->help.ct_pptp_info.keymap_orig); - list_del(&ct->help.ct_pptp_info.keymap_orig->list); - kfree(ct->help.ct_pptp_info.keymap_orig); - ct->help.ct_pptp_info.keymap_orig = NULL; - } - if (ct->help.ct_pptp_info.keymap_reply) { - DEBUGP("removing %p from list\n", - ct->help.ct_pptp_info.keymap_reply); - list_del(&ct->help.ct_pptp_info.keymap_reply->list); - kfree(ct->help.ct_pptp_info.keymap_reply); - ct->help.ct_pptp_info.keymap_reply = NULL; - } - write_unlock_bh(&ip_ct_gre_lock); -} - - -/* PUBLIC CONNTRACK PROTO HELPER FUNCTIONS */ - -/* invert gre part of tuple */ -static int gre_invert_tuple(struct ip_conntrack_tuple *tuple, - const struct ip_conntrack_tuple *orig) -{ - tuple->dst.u.gre.key = orig->src.u.gre.key; - tuple->src.u.gre.key = orig->dst.u.gre.key; - - return 1; -} - -/* gre hdr info to tuple */ -static int gre_pkt_to_tuple(const struct sk_buff *skb, - unsigned int dataoff, - struct ip_conntrack_tuple *tuple) -{ - struct gre_hdr_pptp _pgrehdr, *pgrehdr; - __be16 srckey; - struct gre_hdr _grehdr, *grehdr; - - /* first only delinearize old RFC1701 GRE header */ - grehdr = skb_header_pointer(skb, dataoff, sizeof(_grehdr), &_grehdr); - if (!grehdr || grehdr->version != GRE_VERSION_PPTP) { - /* try to behave like "ip_conntrack_proto_generic" */ - tuple->src.u.all = 0; - tuple->dst.u.all = 0; - return 1; - } - - /* PPTP header is variable length, only need up to the call_id field */ - pgrehdr = skb_header_pointer(skb, dataoff, 8, &_pgrehdr); - if (!pgrehdr) - return 1; - - if (ntohs(grehdr->protocol) != GRE_PROTOCOL_PPTP) { - DEBUGP("GRE_VERSION_PPTP but unknown proto\n"); - return 0; - } - - tuple->dst.u.gre.key = pgrehdr->call_id; - srckey = gre_keymap_lookup(tuple); - tuple->src.u.gre.key = srckey; - - return 1; -} - -/* print gre part of tuple */ -static int gre_print_tuple(struct seq_file *s, - const struct ip_conntrack_tuple *tuple) -{ - return seq_printf(s, "srckey=0x%x dstkey=0x%x ", - ntohs(tuple->src.u.gre.key), - ntohs(tuple->dst.u.gre.key)); -} - -/* print private data for conntrack */ -static int gre_print_conntrack(struct seq_file *s, - const struct ip_conntrack *ct) -{ - return seq_printf(s, "timeout=%u, stream_timeout=%u ", - (ct->proto.gre.timeout / HZ), - (ct->proto.gre.stream_timeout / HZ)); -} - -/* Returns verdict for packet, and may modify conntrack */ -static int gre_packet(struct ip_conntrack *ct, - const struct sk_buff *skb, - enum ip_conntrack_info conntrackinfo) -{ - /* If we've seen traffic both ways, this is a GRE connection. - * Extend timeout. */ - if (ct->status & IPS_SEEN_REPLY) { - ip_ct_refresh_acct(ct, conntrackinfo, skb, - ct->proto.gre.stream_timeout); - /* Also, more likely to be important, and not a probe. 
*/ - set_bit(IPS_ASSURED_BIT, &ct->status); - ip_conntrack_event_cache(IPCT_STATUS, skb); - } else - ip_ct_refresh_acct(ct, conntrackinfo, skb, - ct->proto.gre.timeout); - - return NF_ACCEPT; -} - -/* Called when a new connection for this protocol found. */ -static int gre_new(struct ip_conntrack *ct, - const struct sk_buff *skb) -{ - DEBUGP(": "); - DUMP_TUPLE_GRE(&ct->tuplehash[IP_CT_DIR_ORIGINAL].tuple); - - /* initialize to sane value. Ideally a conntrack helper - * (e.g. in case of pptp) is increasing them */ - ct->proto.gre.stream_timeout = GRE_STREAM_TIMEOUT; - ct->proto.gre.timeout = GRE_TIMEOUT; - - return 1; -} - -/* Called when a conntrack entry has already been removed from the hashes - * and is about to be deleted from memory */ -static void gre_destroy(struct ip_conntrack *ct) -{ - struct ip_conntrack *master = ct->master; - DEBUGP(" entering\n"); - - if (!master) - DEBUGP("no master !?!\n"); - else - ip_ct_gre_keymap_destroy(master); -} - -/* protocol helper struct */ -static struct ip_conntrack_protocol gre = { - .proto = IPPROTO_GRE, - .name = "gre", - .pkt_to_tuple = gre_pkt_to_tuple, - .invert_tuple = gre_invert_tuple, - .print_tuple = gre_print_tuple, - .print_conntrack = gre_print_conntrack, - .packet = gre_packet, - .new = gre_new, - .destroy = gre_destroy, - .me = THIS_MODULE, -#if defined(CONFIG_IP_NF_CONNTRACK_NETLINK) || \ - defined(CONFIG_IP_NF_CONNTRACK_NETLINK_MODULE) - .tuple_to_nfattr = ip_ct_port_tuple_to_nfattr, - .nfattr_to_tuple = ip_ct_port_nfattr_to_tuple, -#endif -}; - -/* ip_conntrack_proto_gre initialization */ -int __init ip_ct_proto_gre_init(void) -{ - return ip_conntrack_protocol_register(&gre); -} - -/* This cannot be __exit, as it is invoked from ip_conntrack_helper_pptp.c's - * init() code on errors. - */ -void ip_ct_proto_gre_fini(void) -{ - struct list_head *pos, *n; - - /* delete all keymap entries */ - write_lock_bh(&ip_ct_gre_lock); - list_for_each_safe(pos, n, &gre_keymap_list) { - DEBUGP("deleting keymap %p at module unload time\n", pos); - list_del(pos); - kfree(pos); - } - write_unlock_bh(&ip_ct_gre_lock); - - ip_conntrack_protocol_unregister(&gre); -} - -EXPORT_SYMBOL(ip_ct_gre_keymap_add); -EXPORT_SYMBOL(ip_ct_gre_keymap_destroy); diff -puN net/ipv4/netfilter/ip_conntrack_proto_icmp.c~git-net /dev/null --- a/net/ipv4/netfilter/ip_conntrack_proto_icmp.c +++ /dev/null @@ -1,315 +0,0 @@ -/* (C) 1999-2001 Paul `Rusty' Russell - * (C) 2002-2004 Netfilter Core Team - * - * This program is free software; you can redistribute it and/or modify - * it under the terms of the GNU General Public License version 2 as - * published by the Free Software Foundation. - */ - -#include -#include -#include -#include -#include -#include -#include -#include -#include -#include -#include -#include -#include - -unsigned int ip_ct_icmp_timeout __read_mostly = 30*HZ; - -#if 0 -#define DEBUGP printk -#else -#define DEBUGP(format, args...) -#endif - -static int icmp_pkt_to_tuple(const struct sk_buff *skb, - unsigned int dataoff, - struct ip_conntrack_tuple *tuple) -{ - struct icmphdr _hdr, *hp; - - hp = skb_header_pointer(skb, dataoff, sizeof(_hdr), &_hdr); - if (hp == NULL) - return 0; - - tuple->dst.u.icmp.type = hp->type; - tuple->src.u.icmp.id = hp->un.echo.id; - tuple->dst.u.icmp.code = hp->code; - - return 1; -} - -/* Add 1; spaces filled with 0. 
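The "+1" trick mentioned above lets a zero entry in the table that follows mean "no inverse defined", even though ICMP type 0 (echo reply) is itself a valid mapping target; icmp_invert_tuple() subtracts the one again. A minimal standalone sketch of the same idiom (invert_type() is a hypothetical helper, not part of this file):

/* Store "inverse type + 1" so that 0 can stand for "unmapped". */
static int invert_type(const unsigned char *map, unsigned int len,
		       unsigned char type)
{
	if (type >= len || !map[type])
		return -1;		/* no inverse defined for this type */
	return map[type] - 1;		/* undo the +1 offset */
}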
*/ -static const u_int8_t invmap[] = { - [ICMP_ECHO] = ICMP_ECHOREPLY + 1, - [ICMP_ECHOREPLY] = ICMP_ECHO + 1, - [ICMP_TIMESTAMP] = ICMP_TIMESTAMPREPLY + 1, - [ICMP_TIMESTAMPREPLY] = ICMP_TIMESTAMP + 1, - [ICMP_INFO_REQUEST] = ICMP_INFO_REPLY + 1, - [ICMP_INFO_REPLY] = ICMP_INFO_REQUEST + 1, - [ICMP_ADDRESS] = ICMP_ADDRESSREPLY + 1, - [ICMP_ADDRESSREPLY] = ICMP_ADDRESS + 1 -}; - -static int icmp_invert_tuple(struct ip_conntrack_tuple *tuple, - const struct ip_conntrack_tuple *orig) -{ - if (orig->dst.u.icmp.type >= sizeof(invmap) - || !invmap[orig->dst.u.icmp.type]) - return 0; - - tuple->src.u.icmp.id = orig->src.u.icmp.id; - tuple->dst.u.icmp.type = invmap[orig->dst.u.icmp.type] - 1; - tuple->dst.u.icmp.code = orig->dst.u.icmp.code; - return 1; -} - -/* Print out the per-protocol part of the tuple. */ -static int icmp_print_tuple(struct seq_file *s, - const struct ip_conntrack_tuple *tuple) -{ - return seq_printf(s, "type=%u code=%u id=%u ", - tuple->dst.u.icmp.type, - tuple->dst.u.icmp.code, - ntohs(tuple->src.u.icmp.id)); -} - -/* Print out the private part of the conntrack. */ -static int icmp_print_conntrack(struct seq_file *s, - const struct ip_conntrack *conntrack) -{ - return 0; -} - -/* Returns verdict for packet, or -1 for invalid. */ -static int icmp_packet(struct ip_conntrack *ct, - const struct sk_buff *skb, - enum ip_conntrack_info ctinfo) -{ - /* Try to delete connection immediately after all replies: - won't actually vanish as we still have skb, and del_timer - means this will only run once even if count hits zero twice - (theoretically possible with SMP) */ - if (CTINFO2DIR(ctinfo) == IP_CT_DIR_REPLY) { - if (atomic_dec_and_test(&ct->proto.icmp.count) - && del_timer(&ct->timeout)) - ct->timeout.function((unsigned long)ct); - } else { - atomic_inc(&ct->proto.icmp.count); - ip_conntrack_event_cache(IPCT_PROTOINFO_VOLATILE, skb); - ip_ct_refresh_acct(ct, ctinfo, skb, ip_ct_icmp_timeout); - } - - return NF_ACCEPT; -} - -/* Called when a new connection for this protocol found. */ -static int icmp_new(struct ip_conntrack *conntrack, - const struct sk_buff *skb) -{ - static const u_int8_t valid_new[] = { - [ICMP_ECHO] = 1, - [ICMP_TIMESTAMP] = 1, - [ICMP_INFO_REQUEST] = 1, - [ICMP_ADDRESS] = 1 - }; - - if (conntrack->tuplehash[0].tuple.dst.u.icmp.type >= sizeof(valid_new) - || !valid_new[conntrack->tuplehash[0].tuple.dst.u.icmp.type]) { - /* Can't create a new ICMP `conn' with this. */ - DEBUGP("icmp: can't create new conn with type %u\n", - conntrack->tuplehash[0].tuple.dst.u.icmp.type); - DUMP_TUPLE(&conntrack->tuplehash[0].tuple); - return 0; - } - atomic_set(&conntrack->proto.icmp.count, 0); - return 1; -} - -static int -icmp_error_message(struct sk_buff *skb, - enum ip_conntrack_info *ctinfo, - unsigned int hooknum) -{ - struct ip_conntrack_tuple innertuple, origtuple; - struct { - struct icmphdr icmp; - struct iphdr ip; - } _in, *inside; - struct ip_conntrack_protocol *innerproto; - struct ip_conntrack_tuple_hash *h; - int dataoff; - - IP_NF_ASSERT(skb->nfct == NULL); - - /* Not enough header? 
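icmp_error_message() relies on the fact that ICMP errors quote the offending datagram: the full IP header plus at least the first eight bytes of its payload (RFC 792), which is exactly enough to rebuild the inner tuple. A rough sketch of the nesting being parsed here (the struct name is an assumption; the original uses an unnamed local struct):

#include <linux/ip.h>
#include <linux/icmp.h>

/* An ICMP error as seen from the outer IP payload - sketch only. */
struct icmp_error_quote {
	struct icmphdr icmp;		/* dest-unreach, time-exceeded, ... */
	struct iphdr   inner_ip;	/* header of the packet that caused it */
	/* the first 8 bytes of the inner payload follow: enough for the
	 * TCP/UDP ports or the ICMP id needed to rebuild the tuple */
};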
*/ - inside = skb_header_pointer(skb, skb->nh.iph->ihl*4, sizeof(_in), &_in); - if (inside == NULL) - return -NF_ACCEPT; - - /* Ignore ICMP's containing fragments (shouldn't happen) */ - if (inside->ip.frag_off & htons(IP_OFFSET)) { - DEBUGP("icmp_error_track: fragment of proto %u\n", - inside->ip.protocol); - return -NF_ACCEPT; - } - - innerproto = ip_conntrack_proto_find_get(inside->ip.protocol); - dataoff = skb->nh.iph->ihl*4 + sizeof(inside->icmp) + inside->ip.ihl*4; - /* Are they talking about one of our connections? */ - if (!ip_ct_get_tuple(&inside->ip, skb, dataoff, &origtuple, innerproto)) { - DEBUGP("icmp_error: ! get_tuple p=%u", inside->ip.protocol); - ip_conntrack_proto_put(innerproto); - return -NF_ACCEPT; - } - - /* Ordinarily, we'd expect the inverted tupleproto, but it's - been preserved inside the ICMP. */ - if (!ip_ct_invert_tuple(&innertuple, &origtuple, innerproto)) { - DEBUGP("icmp_error_track: Can't invert tuple\n"); - ip_conntrack_proto_put(innerproto); - return -NF_ACCEPT; - } - ip_conntrack_proto_put(innerproto); - - *ctinfo = IP_CT_RELATED; - - h = ip_conntrack_find_get(&innertuple, NULL); - if (!h) { - /* Locally generated ICMPs will match inverted if they - haven't been SNAT'ed yet */ - /* FIXME: NAT code has to handle half-done double NAT --RR */ - if (hooknum == NF_IP_LOCAL_OUT) - h = ip_conntrack_find_get(&origtuple, NULL); - - if (!h) { - DEBUGP("icmp_error_track: no match\n"); - return -NF_ACCEPT; - } - /* Reverse direction from that found */ - if (DIRECTION(h) != IP_CT_DIR_REPLY) - *ctinfo += IP_CT_IS_REPLY; - } else { - if (DIRECTION(h) == IP_CT_DIR_REPLY) - *ctinfo += IP_CT_IS_REPLY; - } - - /* Update skb to refer to this connection */ - skb->nfct = &tuplehash_to_ctrack(h)->ct_general; - skb->nfctinfo = *ctinfo; - return -NF_ACCEPT; -} - -/* Small and modified version of icmp_rcv */ -static int -icmp_error(struct sk_buff *skb, enum ip_conntrack_info *ctinfo, - unsigned int hooknum) -{ - struct icmphdr _ih, *icmph; - - /* Not enough header? */ - icmph = skb_header_pointer(skb, skb->nh.iph->ihl*4, sizeof(_ih), &_ih); - if (icmph == NULL) { - if (LOG_INVALID(IPPROTO_ICMP)) - nf_log_packet(PF_INET, 0, skb, NULL, NULL, NULL, - "ip_ct_icmp: short packet "); - return -NF_ACCEPT; - } - - /* See ip_conntrack_proto_tcp.c */ - if (ip_conntrack_checksum && hooknum == NF_IP_PRE_ROUTING && - nf_ip_checksum(skb, hooknum, skb->nh.iph->ihl * 4, 0)) { - if (LOG_INVALID(IPPROTO_ICMP)) - nf_log_packet(PF_INET, 0, skb, NULL, NULL, NULL, - "ip_ct_icmp: bad ICMP checksum "); - return -NF_ACCEPT; - } - - /* - * 18 is the highest 'known' ICMP type. Anything else is a mystery - * - * RFC 1122: 3.2.2 Unknown ICMP messages types MUST be silently - * discarded. - */ - if (icmph->type > NR_ICMP_TYPES) { - if (LOG_INVALID(IPPROTO_ICMP)) - nf_log_packet(PF_INET, 0, skb, NULL, NULL, NULL, - "ip_ct_icmp: invalid ICMP type "); - return -NF_ACCEPT; - } - - /* Need to track icmp error message? 
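Only a handful of ICMP types actually quote an offending packet and get the RELATED treatment; everything else (echo, timestamp and friends) is tracked as an ordinary tuple. The check that follows boils down to this predicate (icmp_is_error_type() is a hypothetical name, not part of this file):

#include <linux/icmp.h>

/* ICMP types that embed the packet which triggered them. */
static int icmp_is_error_type(u_int8_t type)
{
	switch (type) {
	case ICMP_DEST_UNREACH:
	case ICMP_SOURCE_QUENCH:
	case ICMP_TIME_EXCEEDED:
	case ICMP_PARAMETERPROB:
	case ICMP_REDIRECT:
		return 1;
	default:
		return 0;
	}
}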
*/ - if (icmph->type != ICMP_DEST_UNREACH - && icmph->type != ICMP_SOURCE_QUENCH - && icmph->type != ICMP_TIME_EXCEEDED - && icmph->type != ICMP_PARAMETERPROB - && icmph->type != ICMP_REDIRECT) - return NF_ACCEPT; - - return icmp_error_message(skb, ctinfo, hooknum); -} - -#if defined(CONFIG_IP_NF_CONNTRACK_NETLINK) || \ - defined(CONFIG_IP_NF_CONNTRACK_NETLINK_MODULE) -static int icmp_tuple_to_nfattr(struct sk_buff *skb, - const struct ip_conntrack_tuple *t) -{ - NFA_PUT(skb, CTA_PROTO_ICMP_ID, sizeof(__be16), - &t->src.u.icmp.id); - NFA_PUT(skb, CTA_PROTO_ICMP_TYPE, sizeof(u_int8_t), - &t->dst.u.icmp.type); - NFA_PUT(skb, CTA_PROTO_ICMP_CODE, sizeof(u_int8_t), - &t->dst.u.icmp.code); - - return 0; - -nfattr_failure: - return -1; -} - -static int icmp_nfattr_to_tuple(struct nfattr *tb[], - struct ip_conntrack_tuple *tuple) -{ - if (!tb[CTA_PROTO_ICMP_TYPE-1] - || !tb[CTA_PROTO_ICMP_CODE-1] - || !tb[CTA_PROTO_ICMP_ID-1]) - return -EINVAL; - - tuple->dst.u.icmp.type = - *(u_int8_t *)NFA_DATA(tb[CTA_PROTO_ICMP_TYPE-1]); - tuple->dst.u.icmp.code = - *(u_int8_t *)NFA_DATA(tb[CTA_PROTO_ICMP_CODE-1]); - tuple->src.u.icmp.id = - *(__be16 *)NFA_DATA(tb[CTA_PROTO_ICMP_ID-1]); - - if (tuple->dst.u.icmp.type >= sizeof(invmap) - || !invmap[tuple->dst.u.icmp.type]) - return -EINVAL; - - return 0; -} -#endif - -struct ip_conntrack_protocol ip_conntrack_protocol_icmp = -{ - .proto = IPPROTO_ICMP, - .name = "icmp", - .pkt_to_tuple = icmp_pkt_to_tuple, - .invert_tuple = icmp_invert_tuple, - .print_tuple = icmp_print_tuple, - .print_conntrack = icmp_print_conntrack, - .packet = icmp_packet, - .new = icmp_new, - .error = icmp_error, -#if defined(CONFIG_IP_NF_CONNTRACK_NETLINK) || \ - defined(CONFIG_IP_NF_CONNTRACK_NETLINK_MODULE) - .tuple_to_nfattr = icmp_tuple_to_nfattr, - .nfattr_to_tuple = icmp_nfattr_to_tuple, -#endif -}; diff -puN net/ipv4/netfilter/ip_conntrack_proto_sctp.c~git-net /dev/null --- a/net/ipv4/netfilter/ip_conntrack_proto_sctp.c +++ /dev/null @@ -1,659 +0,0 @@ -/* - * Connection tracking protocol helper module for SCTP. - * - * SCTP is defined in RFC 2960. References to various sections in this code - * are to this RFC. - * - * This program is free software; you can redistribute it and/or modify - * it under the terms of the GNU General Public License version 2 as - * published by the Free Software Foundation. - */ - -/* - * Added support for proc manipulation of timeouts. - */ - -#include -#include -#include -#include -#include -#include -#include -#include -#include -#include - -#include -#include - -#if 0 -#define DEBUGP(format, ...) printk(format, ## __VA_ARGS__) -#else -#define DEBUGP(format, args...) -#endif - -/* Protects conntrack->proto.sctp */ -static DEFINE_RWLOCK(sctp_lock); - -/* FIXME: Examine ipfilter's timeouts and conntrack transitions more - closely. They're more complex. 
--RR - - And so for me for SCTP :D -Kiran */ - -static const char *sctp_conntrack_names[] = { - "NONE", - "CLOSED", - "COOKIE_WAIT", - "COOKIE_ECHOED", - "ESTABLISHED", - "SHUTDOWN_SENT", - "SHUTDOWN_RECD", - "SHUTDOWN_ACK_SENT", -}; - -#define SECS * HZ -#define MINS * 60 SECS -#define HOURS * 60 MINS -#define DAYS * 24 HOURS - -static unsigned int ip_ct_sctp_timeout_closed __read_mostly = 10 SECS; -static unsigned int ip_ct_sctp_timeout_cookie_wait __read_mostly = 3 SECS; -static unsigned int ip_ct_sctp_timeout_cookie_echoed __read_mostly = 3 SECS; -static unsigned int ip_ct_sctp_timeout_established __read_mostly = 5 DAYS; -static unsigned int ip_ct_sctp_timeout_shutdown_sent __read_mostly = 300 SECS / 1000; -static unsigned int ip_ct_sctp_timeout_shutdown_recd __read_mostly = 300 SECS / 1000; -static unsigned int ip_ct_sctp_timeout_shutdown_ack_sent __read_mostly = 3 SECS; - -static const unsigned int * sctp_timeouts[] -= { NULL, /* SCTP_CONNTRACK_NONE */ - &ip_ct_sctp_timeout_closed, /* SCTP_CONNTRACK_CLOSED */ - &ip_ct_sctp_timeout_cookie_wait, /* SCTP_CONNTRACK_COOKIE_WAIT */ - &ip_ct_sctp_timeout_cookie_echoed, /* SCTP_CONNTRACK_COOKIE_ECHOED */ - &ip_ct_sctp_timeout_established, /* SCTP_CONNTRACK_ESTABLISHED */ - &ip_ct_sctp_timeout_shutdown_sent, /* SCTP_CONNTRACK_SHUTDOWN_SENT */ - &ip_ct_sctp_timeout_shutdown_recd, /* SCTP_CONNTRACK_SHUTDOWN_RECD */ - &ip_ct_sctp_timeout_shutdown_ack_sent /* SCTP_CONNTRACK_SHUTDOWN_ACK_SENT */ - }; - -#define sNO SCTP_CONNTRACK_NONE -#define sCL SCTP_CONNTRACK_CLOSED -#define sCW SCTP_CONNTRACK_COOKIE_WAIT -#define sCE SCTP_CONNTRACK_COOKIE_ECHOED -#define sES SCTP_CONNTRACK_ESTABLISHED -#define sSS SCTP_CONNTRACK_SHUTDOWN_SENT -#define sSR SCTP_CONNTRACK_SHUTDOWN_RECD -#define sSA SCTP_CONNTRACK_SHUTDOWN_ACK_SENT -#define sIV SCTP_CONNTRACK_MAX - -/* - These are the descriptions of the states: - -NOTE: These state names are tantalizingly similar to the states of an -SCTP endpoint. But the interpretation of the states is a little different, -considering that these are the states of the connection and not of an end -point. Please note the subtleties. -Kiran - -NONE - Nothing so far. -COOKIE WAIT - We have seen an INIT chunk in the original direction, or also - an INIT_ACK chunk in the reply direction. -COOKIE ECHOED - We have seen a COOKIE_ECHO chunk in the original direction. -ESTABLISHED - We have seen a COOKIE_ACK in the reply direction. -SHUTDOWN_SENT - We have seen a SHUTDOWN chunk in the original direction. -SHUTDOWN_RECD - We have seen a SHUTDOWN chunk in the reply directoin. -SHUTDOWN_ACK_SENT - We have seen a SHUTDOWN_ACK chunk in the direction opposite - to that of the SHUTDOWN chunk. -CLOSED - We have seen a SHUTDOWN_COMPLETE chunk in the direction of - the SHUTDOWN chunk. Connection is closed. -*/ - -/* TODO - - I have assumed that the first INIT is in the original direction. - This messes things when an INIT comes in the reply direction in CLOSED - state. - - Check the error type in the reply dir before transitioning from -cookie echoed to closed. - - Sec 5.2.4 of RFC 2960 - - Multi Homing support. 
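A note on the timeout definitions above: SECS, MINS, HOURS and DAYS are purely textual, so an expression like "5 DAYS" expands to an ordinary product in jiffies. A self-contained illustration (HZ is fixed here only so the snippet compiles on its own; example_timeout is hypothetical):

#define HZ	100		/* assumed tick rate, for illustration only */
#define SECS	* HZ
#define MINS	* 60 SECS
#define HOURS	* 60 MINS
#define DAYS	* 24 HOURS

/* "3 SECS" becomes "3 * HZ"; "5 DAYS" becomes "5 * 24 * 60 * 60 * HZ". */
static const unsigned int example_timeout = 5 DAYS;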
-*/ - -/* SCTP conntrack state transitions */ -static const enum sctp_conntrack sctp_conntracks[2][9][SCTP_CONNTRACK_MAX] = { - { -/* ORIGINAL */ -/* sNO, sCL, sCW, sCE, sES, sSS, sSR, sSA */ -/* init */ {sCW, sCW, sCW, sCE, sES, sSS, sSR, sSA}, -/* init_ack */ {sCL, sCL, sCW, sCE, sES, sSS, sSR, sSA}, -/* abort */ {sCL, sCL, sCL, sCL, sCL, sCL, sCL, sCL}, -/* shutdown */ {sCL, sCL, sCW, sCE, sSS, sSS, sSR, sSA}, -/* shutdown_ack */ {sSA, sCL, sCW, sCE, sES, sSA, sSA, sSA}, -/* error */ {sCL, sCL, sCW, sCE, sES, sSS, sSR, sSA},/* Cant have Stale cookie*/ -/* cookie_echo */ {sCL, sCL, sCE, sCE, sES, sSS, sSR, sSA},/* 5.2.4 - Big TODO */ -/* cookie_ack */ {sCL, sCL, sCW, sCE, sES, sSS, sSR, sSA},/* Cant come in orig dir */ -/* shutdown_comp*/ {sCL, sCL, sCW, sCE, sES, sSS, sSR, sCL} - }, - { -/* REPLY */ -/* sNO, sCL, sCW, sCE, sES, sSS, sSR, sSA */ -/* init */ {sIV, sCL, sCW, sCE, sES, sSS, sSR, sSA},/* INIT in sCL Big TODO */ -/* init_ack */ {sIV, sCL, sCW, sCE, sES, sSS, sSR, sSA}, -/* abort */ {sIV, sCL, sCL, sCL, sCL, sCL, sCL, sCL}, -/* shutdown */ {sIV, sCL, sCW, sCE, sSR, sSS, sSR, sSA}, -/* shutdown_ack */ {sIV, sCL, sCW, sCE, sES, sSA, sSA, sSA}, -/* error */ {sIV, sCL, sCW, sCL, sES, sSS, sSR, sSA}, -/* cookie_echo */ {sIV, sCL, sCW, sCE, sES, sSS, sSR, sSA},/* Cant come in reply dir */ -/* cookie_ack */ {sIV, sCL, sCW, sES, sES, sSS, sSR, sSA}, -/* shutdown_comp*/ {sIV, sCL, sCW, sCE, sES, sSS, sSR, sCL} - } -}; - -static int sctp_pkt_to_tuple(const struct sk_buff *skb, - unsigned int dataoff, - struct ip_conntrack_tuple *tuple) -{ - sctp_sctphdr_t _hdr, *hp; - - DEBUGP(__FUNCTION__); - DEBUGP("\n"); - - /* Actually only need first 8 bytes. */ - hp = skb_header_pointer(skb, dataoff, 8, &_hdr); - if (hp == NULL) - return 0; - - tuple->src.u.sctp.port = hp->source; - tuple->dst.u.sctp.port = hp->dest; - return 1; -} - -static int sctp_invert_tuple(struct ip_conntrack_tuple *tuple, - const struct ip_conntrack_tuple *orig) -{ - DEBUGP(__FUNCTION__); - DEBUGP("\n"); - - tuple->src.u.sctp.port = orig->dst.u.sctp.port; - tuple->dst.u.sctp.port = orig->src.u.sctp.port; - return 1; -} - -/* Print out the per-protocol part of the tuple. */ -static int sctp_print_tuple(struct seq_file *s, - const struct ip_conntrack_tuple *tuple) -{ - DEBUGP(__FUNCTION__); - DEBUGP("\n"); - - return seq_printf(s, "sport=%hu dport=%hu ", - ntohs(tuple->src.u.sctp.port), - ntohs(tuple->dst.u.sctp.port)); -} - -/* Print out the private part of the conntrack. 
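Consulting the transition table above is a single array lookup. The sketch below shows the indexing, assuming the sctp_conntracks table and enums from this file plus the chunk-to-row mapping that new_state() implements further down (0 = INIT, 1 = INIT_ACK, ... 8 = SHUTDOWN_COMPLETE); sctp_next_state() itself is hypothetical.

/* dir is 0 (ORIGINAL) or 1 (REPLY); a result of SCTP_CONNTRACK_MAX
 * (sIV in the table) means the chunk is invalid in the current state. */
static enum sctp_conntrack
sctp_next_state(enum ip_conntrack_dir dir, int chunk_row,
		enum sctp_conntrack cur)
{
	return sctp_conntracks[dir][chunk_row][cur];
}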
*/ -static int sctp_print_conntrack(struct seq_file *s, - const struct ip_conntrack *conntrack) -{ - enum sctp_conntrack state; - - DEBUGP(__FUNCTION__); - DEBUGP("\n"); - - read_lock_bh(&sctp_lock); - state = conntrack->proto.sctp.state; - read_unlock_bh(&sctp_lock); - - return seq_printf(s, "%s ", sctp_conntrack_names[state]); -} - -#define for_each_sctp_chunk(skb, sch, _sch, offset, count) \ -for (offset = skb->nh.iph->ihl * 4 + sizeof(sctp_sctphdr_t), count = 0; \ - offset < skb->len && \ - (sch = skb_header_pointer(skb, offset, sizeof(_sch), &_sch)); \ - offset += (ntohs(sch->length) + 3) & ~3, count++) - -/* Some validity checks to make sure the chunks are fine */ -static int do_basic_checks(struct ip_conntrack *conntrack, - const struct sk_buff *skb, - char *map) -{ - u_int32_t offset, count; - sctp_chunkhdr_t _sch, *sch; - int flag; - - DEBUGP(__FUNCTION__); - DEBUGP("\n"); - - flag = 0; - - for_each_sctp_chunk (skb, sch, _sch, offset, count) { - DEBUGP("Chunk Num: %d Type: %d\n", count, sch->type); - - if (sch->type == SCTP_CID_INIT - || sch->type == SCTP_CID_INIT_ACK - || sch->type == SCTP_CID_SHUTDOWN_COMPLETE) { - flag = 1; - } - - /* - * Cookie Ack/Echo chunks not the first OR - * Init / Init Ack / Shutdown compl chunks not the only chunks - * OR zero-length. - */ - if (((sch->type == SCTP_CID_COOKIE_ACK - || sch->type == SCTP_CID_COOKIE_ECHO - || flag) - && count !=0) || !sch->length) { - DEBUGP("Basic checks failed\n"); - return 1; - } - - if (map) { - set_bit(sch->type, (void *)map); - } - } - - DEBUGP("Basic checks passed\n"); - return count == 0; -} - -static int new_state(enum ip_conntrack_dir dir, - enum sctp_conntrack cur_state, - int chunk_type) -{ - int i; - - DEBUGP(__FUNCTION__); - DEBUGP("\n"); - - DEBUGP("Chunk type: %d\n", chunk_type); - - switch (chunk_type) { - case SCTP_CID_INIT: - DEBUGP("SCTP_CID_INIT\n"); - i = 0; break; - case SCTP_CID_INIT_ACK: - DEBUGP("SCTP_CID_INIT_ACK\n"); - i = 1; break; - case SCTP_CID_ABORT: - DEBUGP("SCTP_CID_ABORT\n"); - i = 2; break; - case SCTP_CID_SHUTDOWN: - DEBUGP("SCTP_CID_SHUTDOWN\n"); - i = 3; break; - case SCTP_CID_SHUTDOWN_ACK: - DEBUGP("SCTP_CID_SHUTDOWN_ACK\n"); - i = 4; break; - case SCTP_CID_ERROR: - DEBUGP("SCTP_CID_ERROR\n"); - i = 5; break; - case SCTP_CID_COOKIE_ECHO: - DEBUGP("SCTP_CID_COOKIE_ECHO\n"); - i = 6; break; - case SCTP_CID_COOKIE_ACK: - DEBUGP("SCTP_CID_COOKIE_ACK\n"); - i = 7; break; - case SCTP_CID_SHUTDOWN_COMPLETE: - DEBUGP("SCTP_CID_SHUTDOWN_COMPLETE\n"); - i = 8; break; - default: - /* Other chunks like DATA, SACK, HEARTBEAT and - its ACK do not cause a change in state */ - DEBUGP("Unknown chunk type, Will stay in %s\n", - sctp_conntrack_names[cur_state]); - return cur_state; - } - - DEBUGP("dir: %d cur_state: %s chunk_type: %d new_state: %s\n", - dir, sctp_conntrack_names[cur_state], chunk_type, - sctp_conntrack_names[sctp_conntracks[dir][i][cur_state]]); - - return sctp_conntracks[dir][i][cur_state]; -} - -/* Returns verdict for packet, or -1 for invalid. 
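The for_each_sctp_chunk() macro above steps through the chunk list by rounding every chunk length up to a four-byte boundary, as RFC 2960 section 3.2 requires. A stripped-down version of the same walk over a linear buffer (walk_chunks() and struct chunk_hdr are hypothetical; the kernel macro works on the skb instead):

#include <stdint.h>
#include <arpa/inet.h>

/* Minimal chunk header, matching the RFC 2960 wire layout. */
struct chunk_hdr {
	uint8_t  type;
	uint8_t  flags;
	uint16_t length;	/* big-endian, excludes padding */
};

static void walk_chunks(const unsigned char *buf, unsigned int len,
			unsigned int first_chunk_off)
{
	unsigned int off = first_chunk_off;	/* past the common SCTP header */

	while (off + sizeof(struct chunk_hdr) <= len) {
		const struct chunk_hdr *ch = (const void *)(buf + off);
		unsigned int clen = ntohs(ch->length);

		if (!clen)
			break;				/* malformed chunk */
		/* ... inspect ch->type here ... */
		off += (clen + 3) & ~3u;		/* pad to 4 bytes */
	}
}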
*/ -static int sctp_packet(struct ip_conntrack *conntrack, - const struct sk_buff *skb, - enum ip_conntrack_info ctinfo) -{ - enum sctp_conntrack newconntrack, oldsctpstate; - struct iphdr *iph = skb->nh.iph; - sctp_sctphdr_t _sctph, *sh; - sctp_chunkhdr_t _sch, *sch; - u_int32_t offset, count; - char map[256 / sizeof (char)] = {0}; - - DEBUGP(__FUNCTION__); - DEBUGP("\n"); - - sh = skb_header_pointer(skb, iph->ihl * 4, sizeof(_sctph), &_sctph); - if (sh == NULL) - return -1; - - if (do_basic_checks(conntrack, skb, map) != 0) - return -1; - - /* Check the verification tag (Sec 8.5) */ - if (!test_bit(SCTP_CID_INIT, (void *)map) - && !test_bit(SCTP_CID_SHUTDOWN_COMPLETE, (void *)map) - && !test_bit(SCTP_CID_COOKIE_ECHO, (void *)map) - && !test_bit(SCTP_CID_ABORT, (void *)map) - && !test_bit(SCTP_CID_SHUTDOWN_ACK, (void *)map) - && (sh->vtag != conntrack->proto.sctp.vtag[CTINFO2DIR(ctinfo)])) { - DEBUGP("Verification tag check failed\n"); - return -1; - } - - oldsctpstate = newconntrack = SCTP_CONNTRACK_MAX; - for_each_sctp_chunk (skb, sch, _sch, offset, count) { - write_lock_bh(&sctp_lock); - - /* Special cases of Verification tag check (Sec 8.5.1) */ - if (sch->type == SCTP_CID_INIT) { - /* Sec 8.5.1 (A) */ - if (sh->vtag != 0) { - write_unlock_bh(&sctp_lock); - return -1; - } - } else if (sch->type == SCTP_CID_ABORT) { - /* Sec 8.5.1 (B) */ - if (!(sh->vtag == conntrack->proto.sctp.vtag[CTINFO2DIR(ctinfo)]) - && !(sh->vtag == conntrack->proto.sctp.vtag - [1 - CTINFO2DIR(ctinfo)])) { - write_unlock_bh(&sctp_lock); - return -1; - } - } else if (sch->type == SCTP_CID_SHUTDOWN_COMPLETE) { - /* Sec 8.5.1 (C) */ - if (!(sh->vtag == conntrack->proto.sctp.vtag[CTINFO2DIR(ctinfo)]) - && !(sh->vtag == conntrack->proto.sctp.vtag - [1 - CTINFO2DIR(ctinfo)] - && (sch->flags & 1))) { - write_unlock_bh(&sctp_lock); - return -1; - } - } else if (sch->type == SCTP_CID_COOKIE_ECHO) { - /* Sec 8.5.1 (D) */ - if (!(sh->vtag == conntrack->proto.sctp.vtag[CTINFO2DIR(ctinfo)])) { - write_unlock_bh(&sctp_lock); - return -1; - } - } - - oldsctpstate = conntrack->proto.sctp.state; - newconntrack = new_state(CTINFO2DIR(ctinfo), oldsctpstate, sch->type); - - /* Invalid */ - if (newconntrack == SCTP_CONNTRACK_MAX) { - DEBUGP("ip_conntrack_sctp: Invalid dir=%i ctype=%u conntrack=%u\n", - CTINFO2DIR(ctinfo), sch->type, oldsctpstate); - write_unlock_bh(&sctp_lock); - return -1; - } - - /* If it is an INIT or an INIT ACK note down the vtag */ - if (sch->type == SCTP_CID_INIT - || sch->type == SCTP_CID_INIT_ACK) { - sctp_inithdr_t _inithdr, *ih; - - ih = skb_header_pointer(skb, offset + sizeof(sctp_chunkhdr_t), - sizeof(_inithdr), &_inithdr); - if (ih == NULL) { - write_unlock_bh(&sctp_lock); - return -1; - } - DEBUGP("Setting vtag %x for dir %d\n", - ih->init_tag, !CTINFO2DIR(ctinfo)); - conntrack->proto.sctp.vtag[!CTINFO2DIR(ctinfo)] = ih->init_tag; - } - - conntrack->proto.sctp.state = newconntrack; - if (oldsctpstate != newconntrack) - ip_conntrack_event_cache(IPCT_PROTOINFO, skb); - write_unlock_bh(&sctp_lock); - } - - ip_ct_refresh_acct(conntrack, ctinfo, skb, *sctp_timeouts[newconntrack]); - - if (oldsctpstate == SCTP_CONNTRACK_COOKIE_ECHOED - && CTINFO2DIR(ctinfo) == IP_CT_DIR_REPLY - && newconntrack == SCTP_CONNTRACK_ESTABLISHED) { - DEBUGP("Setting assured bit\n"); - set_bit(IPS_ASSURED_BIT, &conntrack->status); - ip_conntrack_event_cache(IPCT_STATUS, skb); - } - - return NF_ACCEPT; -} - -/* Called when a new connection for this protocol found. 
*/ -static int sctp_new(struct ip_conntrack *conntrack, - const struct sk_buff *skb) -{ - enum sctp_conntrack newconntrack; - struct iphdr *iph = skb->nh.iph; - sctp_sctphdr_t _sctph, *sh; - sctp_chunkhdr_t _sch, *sch; - u_int32_t offset, count; - char map[256 / sizeof (char)] = {0}; - - DEBUGP(__FUNCTION__); - DEBUGP("\n"); - - sh = skb_header_pointer(skb, iph->ihl * 4, sizeof(_sctph), &_sctph); - if (sh == NULL) - return 0; - - if (do_basic_checks(conntrack, skb, map) != 0) - return 0; - - /* If an OOTB packet has any of these chunks discard (Sec 8.4) */ - if ((test_bit (SCTP_CID_ABORT, (void *)map)) - || (test_bit (SCTP_CID_SHUTDOWN_COMPLETE, (void *)map)) - || (test_bit (SCTP_CID_COOKIE_ACK, (void *)map))) { - return 0; - } - - newconntrack = SCTP_CONNTRACK_MAX; - for_each_sctp_chunk (skb, sch, _sch, offset, count) { - /* Don't need lock here: this conntrack not in circulation yet */ - newconntrack = new_state (IP_CT_DIR_ORIGINAL, - SCTP_CONNTRACK_NONE, sch->type); - - /* Invalid: delete conntrack */ - if (newconntrack == SCTP_CONNTRACK_MAX) { - DEBUGP("ip_conntrack_sctp: invalid new deleting.\n"); - return 0; - } - - /* Copy the vtag into the state info */ - if (sch->type == SCTP_CID_INIT) { - if (sh->vtag == 0) { - sctp_inithdr_t _inithdr, *ih; - - ih = skb_header_pointer(skb, offset + sizeof(sctp_chunkhdr_t), - sizeof(_inithdr), &_inithdr); - if (ih == NULL) - return 0; - - DEBUGP("Setting vtag %x for new conn\n", - ih->init_tag); - - conntrack->proto.sctp.vtag[IP_CT_DIR_REPLY] = - ih->init_tag; - } else { - /* Sec 8.5.1 (A) */ - return 0; - } - } - /* If it is a shutdown ack OOTB packet, we expect a return - shutdown complete, otherwise an ABORT Sec 8.4 (5) and (8) */ - else { - DEBUGP("Setting vtag %x for new conn OOTB\n", - sh->vtag); - conntrack->proto.sctp.vtag[IP_CT_DIR_REPLY] = sh->vtag; - } - - conntrack->proto.sctp.state = newconntrack; - } - - return 1; -} - -static struct ip_conntrack_protocol ip_conntrack_protocol_sctp = { - .proto = IPPROTO_SCTP, - .name = "sctp", - .pkt_to_tuple = sctp_pkt_to_tuple, - .invert_tuple = sctp_invert_tuple, - .print_tuple = sctp_print_tuple, - .print_conntrack = sctp_print_conntrack, - .packet = sctp_packet, - .new = sctp_new, - .destroy = NULL, - .me = THIS_MODULE, -#if defined(CONFIG_IP_NF_CONNTRACK_NETLINK) || \ - defined(CONFIG_IP_NF_CONNTRACK_NETLINK_MODULE) - .tuple_to_nfattr = ip_ct_port_tuple_to_nfattr, - .nfattr_to_tuple = ip_ct_port_nfattr_to_tuple, -#endif -}; - -#ifdef CONFIG_SYSCTL -static ctl_table ip_ct_sysctl_table[] = { - { - .ctl_name = NET_IPV4_NF_CONNTRACK_SCTP_TIMEOUT_CLOSED, - .procname = "ip_conntrack_sctp_timeout_closed", - .data = &ip_ct_sctp_timeout_closed, - .maxlen = sizeof(unsigned int), - .mode = 0644, - .proc_handler = &proc_dointvec_jiffies, - }, - { - .ctl_name = NET_IPV4_NF_CONNTRACK_SCTP_TIMEOUT_COOKIE_WAIT, - .procname = "ip_conntrack_sctp_timeout_cookie_wait", - .data = &ip_ct_sctp_timeout_cookie_wait, - .maxlen = sizeof(unsigned int), - .mode = 0644, - .proc_handler = &proc_dointvec_jiffies, - }, - { - .ctl_name = NET_IPV4_NF_CONNTRACK_SCTP_TIMEOUT_COOKIE_ECHOED, - .procname = "ip_conntrack_sctp_timeout_cookie_echoed", - .data = &ip_ct_sctp_timeout_cookie_echoed, - .maxlen = sizeof(unsigned int), - .mode = 0644, - .proc_handler = &proc_dointvec_jiffies, - }, - { - .ctl_name = NET_IPV4_NF_CONNTRACK_SCTP_TIMEOUT_ESTABLISHED, - .procname = "ip_conntrack_sctp_timeout_established", - .data = &ip_ct_sctp_timeout_established, - .maxlen = sizeof(unsigned int), - .mode = 0644, - .proc_handler = 
&proc_dointvec_jiffies, - }, - { - .ctl_name = NET_IPV4_NF_CONNTRACK_SCTP_TIMEOUT_SHUTDOWN_SENT, - .procname = "ip_conntrack_sctp_timeout_shutdown_sent", - .data = &ip_ct_sctp_timeout_shutdown_sent, - .maxlen = sizeof(unsigned int), - .mode = 0644, - .proc_handler = &proc_dointvec_jiffies, - }, - { - .ctl_name = NET_IPV4_NF_CONNTRACK_SCTP_TIMEOUT_SHUTDOWN_RECD, - .procname = "ip_conntrack_sctp_timeout_shutdown_recd", - .data = &ip_ct_sctp_timeout_shutdown_recd, - .maxlen = sizeof(unsigned int), - .mode = 0644, - .proc_handler = &proc_dointvec_jiffies, - }, - { - .ctl_name = NET_IPV4_NF_CONNTRACK_SCTP_TIMEOUT_SHUTDOWN_ACK_SENT, - .procname = "ip_conntrack_sctp_timeout_shutdown_ack_sent", - .data = &ip_ct_sctp_timeout_shutdown_ack_sent, - .maxlen = sizeof(unsigned int), - .mode = 0644, - .proc_handler = &proc_dointvec_jiffies, - }, - { .ctl_name = 0 } -}; - -static ctl_table ip_ct_netfilter_table[] = { - { - .ctl_name = NET_IPV4_NETFILTER, - .procname = "netfilter", - .mode = 0555, - .child = ip_ct_sysctl_table, - }, - { .ctl_name = 0 } -}; - -static ctl_table ip_ct_ipv4_table[] = { - { - .ctl_name = NET_IPV4, - .procname = "ipv4", - .mode = 0555, - .child = ip_ct_netfilter_table, - }, - { .ctl_name = 0 } -}; - -static ctl_table ip_ct_net_table[] = { - { - .ctl_name = CTL_NET, - .procname = "net", - .mode = 0555, - .child = ip_ct_ipv4_table, - }, - { .ctl_name = 0 } -}; - -static struct ctl_table_header *ip_ct_sysctl_header; -#endif - -static int __init ip_conntrack_proto_sctp_init(void) -{ - int ret; - - ret = ip_conntrack_protocol_register(&ip_conntrack_protocol_sctp); - if (ret) { - printk("ip_conntrack_proto_sctp: protocol register failed\n"); - goto out; - } - -#ifdef CONFIG_SYSCTL - ip_ct_sysctl_header = register_sysctl_table(ip_ct_net_table); - if (ip_ct_sysctl_header == NULL) { - ret = -ENOMEM; - printk("ip_conntrack_proto_sctp: can't register to sysctl.\n"); - goto cleanup; - } -#endif - - return ret; - -#ifdef CONFIG_SYSCTL - cleanup: - ip_conntrack_protocol_unregister(&ip_conntrack_protocol_sctp); -#endif - out: - DEBUGP("SCTP conntrack module loading %s\n", - ret ? "failed": "succeeded"); - return ret; -} - -static void __exit ip_conntrack_proto_sctp_fini(void) -{ - ip_conntrack_protocol_unregister(&ip_conntrack_protocol_sctp); -#ifdef CONFIG_SYSCTL - unregister_sysctl_table(ip_ct_sysctl_header); -#endif - DEBUGP("SCTP conntrack module unloaded\n"); -} - -module_init(ip_conntrack_proto_sctp_init); -module_exit(ip_conntrack_proto_sctp_fini); - -MODULE_LICENSE("GPL"); -MODULE_AUTHOR("Kiran Kumar Immidi"); -MODULE_DESCRIPTION("Netfilter connection tracking protocol helper for SCTP"); diff -puN net/ipv4/netfilter/ip_conntrack_proto_tcp.c~git-net /dev/null --- a/net/ipv4/netfilter/ip_conntrack_proto_tcp.c +++ /dev/null @@ -1,1164 +0,0 @@ -/* (C) 1999-2001 Paul `Rusty' Russell - * (C) 2002-2004 Netfilter Core Team - * - * This program is free software; you can redistribute it and/or modify - * it under the terms of the GNU General Public License version 2 as - * published by the Free Software Foundation. 
- * - * Jozsef Kadlecsik : - * - Real stateful connection tracking - * - Modified state transitions table - * - Window scaling support added - * - SACK support added - * - * Willy Tarreau: - * - State table bugfixes - * - More robust state changes - * - Tuning timer parameters - * - * version 2.2 - */ - -#include -#include -#include -#include -#include -#include -#include -#include - -#include - -#include -#include -#include - -#if 0 -#define DEBUGP printk -#define DEBUGP_VARS -#else -#define DEBUGP(format, args...) -#endif - -/* Protects conntrack->proto.tcp */ -static DEFINE_RWLOCK(tcp_lock); - -/* "Be conservative in what you do, - be liberal in what you accept from others." - If it's non-zero, we mark only out of window RST segments as INVALID. */ -int ip_ct_tcp_be_liberal __read_mostly = 0; - -/* If it is set to zero, we disable picking up already established - connections. */ -int ip_ct_tcp_loose __read_mostly = 1; - -/* Max number of the retransmitted packets without receiving an (acceptable) - ACK from the destination. If this number is reached, a shorter timer - will be started. */ -int ip_ct_tcp_max_retrans __read_mostly = 3; - - /* FIXME: Examine ipfilter's timeouts and conntrack transitions more - closely. They're more complex. --RR */ - -static const char *tcp_conntrack_names[] = { - "NONE", - "SYN_SENT", - "SYN_RECV", - "ESTABLISHED", - "FIN_WAIT", - "CLOSE_WAIT", - "LAST_ACK", - "TIME_WAIT", - "CLOSE", - "LISTEN" -}; - -#define SECS * HZ -#define MINS * 60 SECS -#define HOURS * 60 MINS -#define DAYS * 24 HOURS - -unsigned int ip_ct_tcp_timeout_syn_sent __read_mostly = 2 MINS; -unsigned int ip_ct_tcp_timeout_syn_recv __read_mostly = 60 SECS; -unsigned int ip_ct_tcp_timeout_established __read_mostly = 5 DAYS; -unsigned int ip_ct_tcp_timeout_fin_wait __read_mostly = 2 MINS; -unsigned int ip_ct_tcp_timeout_close_wait __read_mostly = 60 SECS; -unsigned int ip_ct_tcp_timeout_last_ack __read_mostly = 30 SECS; -unsigned int ip_ct_tcp_timeout_time_wait __read_mostly = 2 MINS; -unsigned int ip_ct_tcp_timeout_close __read_mostly = 10 SECS; - -/* RFC1122 says the R2 limit should be at least 100 seconds. - Linux uses 15 packets as limit, which corresponds - to ~13-30min depending on RTO. */ -unsigned int ip_ct_tcp_timeout_max_retrans __read_mostly = 5 MINS; - -static const unsigned int * tcp_timeouts[] -= { NULL, /* TCP_CONNTRACK_NONE */ - &ip_ct_tcp_timeout_syn_sent, /* TCP_CONNTRACK_SYN_SENT, */ - &ip_ct_tcp_timeout_syn_recv, /* TCP_CONNTRACK_SYN_RECV, */ - &ip_ct_tcp_timeout_established, /* TCP_CONNTRACK_ESTABLISHED, */ - &ip_ct_tcp_timeout_fin_wait, /* TCP_CONNTRACK_FIN_WAIT, */ - &ip_ct_tcp_timeout_close_wait, /* TCP_CONNTRACK_CLOSE_WAIT, */ - &ip_ct_tcp_timeout_last_ack, /* TCP_CONNTRACK_LAST_ACK, */ - &ip_ct_tcp_timeout_time_wait, /* TCP_CONNTRACK_TIME_WAIT, */ - &ip_ct_tcp_timeout_close, /* TCP_CONNTRACK_CLOSE, */ - NULL, /* TCP_CONNTRACK_LISTEN */ - }; - -#define sNO TCP_CONNTRACK_NONE -#define sSS TCP_CONNTRACK_SYN_SENT -#define sSR TCP_CONNTRACK_SYN_RECV -#define sES TCP_CONNTRACK_ESTABLISHED -#define sFW TCP_CONNTRACK_FIN_WAIT -#define sCW TCP_CONNTRACK_CLOSE_WAIT -#define sLA TCP_CONNTRACK_LAST_ACK -#define sTW TCP_CONNTRACK_TIME_WAIT -#define sCL TCP_CONNTRACK_CLOSE -#define sLI TCP_CONNTRACK_LISTEN -#define sIV TCP_CONNTRACK_MAX -#define sIG TCP_CONNTRACK_IGNORE - -/* What TCP flags are set from RST/SYN/FIN/ACK. 
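The SECS/MINS/HOURS/DAYS defines above are simple textual multipliers, so the timeout initialisers expand to ordinary jiffies arithmetic. A one-line illustration (the variable name is made up for the example):

/* "2 MINS" expands to "2 * 60 * HZ", i.e. two minutes worth of jiffies. */
static unsigned int example_fin_wait_timeout = 2 MINS;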
*/ -enum tcp_bit_set { - TCP_SYN_SET, - TCP_SYNACK_SET, - TCP_FIN_SET, - TCP_ACK_SET, - TCP_RST_SET, - TCP_NONE_SET, -}; - -/* - * The TCP state transition table needs a few words... - * - * We are the man in the middle. All the packets go through us - * but might get lost in transit to the destination. - * It is assumed that the destinations can't receive segments - * we haven't seen. - * - * The checked segment is in window, but our windows are *not* - * equivalent with the ones of the sender/receiver. We always - * try to guess the state of the current sender. - * - * The meaning of the states are: - * - * NONE: initial state - * SYN_SENT: SYN-only packet seen - * SYN_RECV: SYN-ACK packet seen - * ESTABLISHED: ACK packet seen - * FIN_WAIT: FIN packet seen - * CLOSE_WAIT: ACK seen (after FIN) - * LAST_ACK: FIN seen (after FIN) - * TIME_WAIT: last ACK seen - * CLOSE: closed connection - * - * LISTEN state is not used. - * - * Packets marked as IGNORED (sIG): - * if they may be either invalid or valid - * and the receiver may send back a connection - * closing RST or a SYN/ACK. - * - * Packets marked as INVALID (sIV): - * if they are invalid - * or we do not support the request (simultaneous open) - */ -static const enum tcp_conntrack tcp_conntracks[2][6][TCP_CONNTRACK_MAX] = { - { -/* ORIGINAL */ -/* sNO, sSS, sSR, sES, sFW, sCW, sLA, sTW, sCL, sLI */ -/*syn*/ { sSS, sSS, sIG, sIG, sIG, sIG, sIG, sSS, sSS, sIV }, -/* - * sNO -> sSS Initialize a new connection - * sSS -> sSS Retransmitted SYN - * sSR -> sIG Late retransmitted SYN? - * sES -> sIG Error: SYNs in window outside the SYN_SENT state - * are errors. Receiver will reply with RST - * and close the connection. - * Or we are not in sync and hold a dead connection. - * sFW -> sIG - * sCW -> sIG - * sLA -> sIG - * sTW -> sSS Reopened connection (RFC 1122). - * sCL -> sSS - */ -/* sNO, sSS, sSR, sES, sFW, sCW, sLA, sTW, sCL, sLI */ -/*synack*/ { sIV, sIV, sIV, sIV, sIV, sIV, sIV, sIV, sIV, sIV }, -/* - * A SYN/ACK from the client is always invalid: - * - either it tries to set up a simultaneous open, which is - * not supported; - * - or the firewall has just been inserted between the two hosts - * during the session set-up. The SYN will be retransmitted - * by the true client (or it'll time out). - */ -/* sNO, sSS, sSR, sES, sFW, sCW, sLA, sTW, sCL, sLI */ -/*fin*/ { sIV, sIV, sFW, sFW, sLA, sLA, sLA, sTW, sCL, sIV }, -/* - * sNO -> sIV Too late and no reason to do anything... - * sSS -> sIV Client migth not send FIN in this state: - * we enforce waiting for a SYN/ACK reply first. - * sSR -> sFW Close started. - * sES -> sFW - * sFW -> sLA FIN seen in both directions, waiting for - * the last ACK. - * Migth be a retransmitted FIN as well... - * sCW -> sLA - * sLA -> sLA Retransmitted FIN. Remain in the same state. - * sTW -> sTW - * sCL -> sCL - */ -/* sNO, sSS, sSR, sES, sFW, sCW, sLA, sTW, sCL, sLI */ -/*ack*/ { sES, sIV, sES, sES, sCW, sCW, sTW, sTW, sCL, sIV }, -/* - * sNO -> sES Assumed. - * sSS -> sIV ACK is invalid: we haven't seen a SYN/ACK yet. - * sSR -> sES Established state is reached. - * sES -> sES :-) - * sFW -> sCW Normal close request answered by ACK. - * sCW -> sCW - * sLA -> sTW Last ACK detected. - * sTW -> sTW Retransmitted last ACK. Remain in the same state. 
- * sCL -> sCL - */ -/* sNO, sSS, sSR, sES, sFW, sCW, sLA, sTW, sCL, sLI */ -/*rst*/ { sIV, sCL, sCL, sCL, sCL, sCL, sCL, sCL, sCL, sIV }, -/*none*/ { sIV, sIV, sIV, sIV, sIV, sIV, sIV, sIV, sIV, sIV } - }, - { -/* REPLY */ -/* sNO, sSS, sSR, sES, sFW, sCW, sLA, sTW, sCL, sLI */ -/*syn*/ { sIV, sIV, sIV, sIV, sIV, sIV, sIV, sIV, sIV, sIV }, -/* - * sNO -> sIV Never reached. - * sSS -> sIV Simultaneous open, not supported - * sSR -> sIV Simultaneous open, not supported. - * sES -> sIV Server may not initiate a connection. - * sFW -> sIV - * sCW -> sIV - * sLA -> sIV - * sTW -> sIV Reopened connection, but server may not do it. - * sCL -> sIV - */ -/* sNO, sSS, sSR, sES, sFW, sCW, sLA, sTW, sCL, sLI */ -/*synack*/ { sIV, sSR, sSR, sIG, sIG, sIG, sIG, sIG, sIG, sIV }, -/* - * sSS -> sSR Standard open. - * sSR -> sSR Retransmitted SYN/ACK. - * sES -> sIG Late retransmitted SYN/ACK? - * sFW -> sIG Might be SYN/ACK answering ignored SYN - * sCW -> sIG - * sLA -> sIG - * sTW -> sIG - * sCL -> sIG - */ -/* sNO, sSS, sSR, sES, sFW, sCW, sLA, sTW, sCL, sLI */ -/*fin*/ { sIV, sIV, sFW, sFW, sLA, sLA, sLA, sTW, sCL, sIV }, -/* - * sSS -> sIV Server might not send FIN in this state. - * sSR -> sFW Close started. - * sES -> sFW - * sFW -> sLA FIN seen in both directions. - * sCW -> sLA - * sLA -> sLA Retransmitted FIN. - * sTW -> sTW - * sCL -> sCL - */ -/* sNO, sSS, sSR, sES, sFW, sCW, sLA, sTW, sCL, sLI */ -/*ack*/ { sIV, sIG, sSR, sES, sCW, sCW, sTW, sTW, sCL, sIV }, -/* - * sSS -> sIG Might be a half-open connection. - * sSR -> sSR Might answer late resent SYN. - * sES -> sES :-) - * sFW -> sCW Normal close request answered by ACK. - * sCW -> sCW - * sLA -> sTW Last ACK detected. - * sTW -> sTW Retransmitted last ACK. - * sCL -> sCL - */ -/* sNO, sSS, sSR, sES, sFW, sCW, sLA, sTW, sCL, sLI */ -/*rst*/ { sIV, sCL, sCL, sCL, sCL, sCL, sCL, sCL, sCL, sIV }, -/*none*/ { sIV, sIV, sIV, sIV, sIV, sIV, sIV, sIV, sIV, sIV } - } -}; - -static int tcp_pkt_to_tuple(const struct sk_buff *skb, - unsigned int dataoff, - struct ip_conntrack_tuple *tuple) -{ - struct tcphdr _hdr, *hp; - - /* Actually only need first 8 bytes. */ - hp = skb_header_pointer(skb, dataoff, 8, &_hdr); - if (hp == NULL) - return 0; - - tuple->src.u.tcp.port = hp->source; - tuple->dst.u.tcp.port = hp->dest; - - return 1; -} - -static int tcp_invert_tuple(struct ip_conntrack_tuple *tuple, - const struct ip_conntrack_tuple *orig) -{ - tuple->src.u.tcp.port = orig->dst.u.tcp.port; - tuple->dst.u.tcp.port = orig->src.u.tcp.port; - return 1; -} - -/* Print out the per-protocol part of the tuple. */ -static int tcp_print_tuple(struct seq_file *s, - const struct ip_conntrack_tuple *tuple) -{ - return seq_printf(s, "sport=%hu dport=%hu ", - ntohs(tuple->src.u.tcp.port), - ntohs(tuple->dst.u.tcp.port)); -} - -/* Print out the private part of the conntrack. 
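The two tables above are the whole TCP state machine: the packet's direction, its flag class from get_conntrack_index() and the current state select the next state. A minimal sketch of the lookup with one worked case (the wrapper name is illustrative, not from the removed file):

/* An original-direction SYN seen in state NONE indexes
 * tcp_conntracks[IP_CT_DIR_ORIGINAL][TCP_SYN_SET][TCP_CONNTRACK_NONE],
 * i.e. sNO -> sSS, so the connection enters SYN_SENT. */
static enum tcp_conntrack example_next_state(enum ip_conntrack_dir dir,
					     const struct tcphdr *th,
					     enum tcp_conntrack old_state)
{
	return tcp_conntracks[dir][get_conntrack_index(th)][old_state];
}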
*/ -static int tcp_print_conntrack(struct seq_file *s, - const struct ip_conntrack *conntrack) -{ - enum tcp_conntrack state; - - read_lock_bh(&tcp_lock); - state = conntrack->proto.tcp.state; - read_unlock_bh(&tcp_lock); - - return seq_printf(s, "%s ", tcp_conntrack_names[state]); -} - -#if defined(CONFIG_IP_NF_CONNTRACK_NETLINK) || \ - defined(CONFIG_IP_NF_CONNTRACK_NETLINK_MODULE) -static int tcp_to_nfattr(struct sk_buff *skb, struct nfattr *nfa, - const struct ip_conntrack *ct) -{ - struct nfattr *nest_parms; - - read_lock_bh(&tcp_lock); - nest_parms = NFA_NEST(skb, CTA_PROTOINFO_TCP); - NFA_PUT(skb, CTA_PROTOINFO_TCP_STATE, sizeof(u_int8_t), - &ct->proto.tcp.state); - read_unlock_bh(&tcp_lock); - - NFA_NEST_END(skb, nest_parms); - - return 0; - -nfattr_failure: - read_unlock_bh(&tcp_lock); - return -1; -} - -static const size_t cta_min_tcp[CTA_PROTOINFO_TCP_MAX] = { - [CTA_PROTOINFO_TCP_STATE-1] = sizeof(u_int8_t), -}; - -static int nfattr_to_tcp(struct nfattr *cda[], struct ip_conntrack *ct) -{ - struct nfattr *attr = cda[CTA_PROTOINFO_TCP-1]; - struct nfattr *tb[CTA_PROTOINFO_TCP_MAX]; - - /* updates could not contain anything about the private - * protocol info, in that case skip the parsing */ - if (!attr) - return 0; - - nfattr_parse_nested(tb, CTA_PROTOINFO_TCP_MAX, attr); - - if (nfattr_bad_size(tb, CTA_PROTOINFO_TCP_MAX, cta_min_tcp)) - return -EINVAL; - - if (!tb[CTA_PROTOINFO_TCP_STATE-1]) - return -EINVAL; - - write_lock_bh(&tcp_lock); - ct->proto.tcp.state = - *(u_int8_t *)NFA_DATA(tb[CTA_PROTOINFO_TCP_STATE-1]); - write_unlock_bh(&tcp_lock); - - return 0; -} -#endif - -static unsigned int get_conntrack_index(const struct tcphdr *tcph) -{ - if (tcph->rst) return TCP_RST_SET; - else if (tcph->syn) return (tcph->ack ? TCP_SYNACK_SET : TCP_SYN_SET); - else if (tcph->fin) return TCP_FIN_SET; - else if (tcph->ack) return TCP_ACK_SET; - else return TCP_NONE_SET; -} - -/* TCP connection tracking based on 'Real Stateful TCP Packet Filtering - in IP Filter' by Guido van Rooij. - - http://www.nluug.nl/events/sane2000/papers.html - http://www.iae.nl/users/guido/papers/tcp_filtering.ps.gz - - The boundaries and the conditions are changed according to RFC793: - the packet must intersect the window (i.e. segments may be - after the right or before the left edge) and thus receivers may ACK - segments after the right edge of the window. - - td_maxend = max(sack + max(win,1)) seen in reply packets - td_maxwin = max(max(win, 1)) + (sack - ack) seen in sent packets - td_maxwin += seq + len - sender.td_maxend - if seq + len > sender.td_maxend - td_end = max(seq + len) seen in sent packets - - I. Upper bound for valid data: seq <= sender.td_maxend - II. Lower bound for valid data: seq + len >= sender.td_end - receiver.td_maxwin - III. Upper bound for valid ack: sack <= receiver.td_end - IV. Lower bound for valid ack: ack >= receiver.td_end - MAXACKWINDOW - - where sack is the highest right edge of sack block found in the packet. - - The upper bound limit for a valid ack is not ignored - - we doesn't have to deal with fragments. -*/ - -static inline __u32 segment_seq_plus_len(__u32 seq, - size_t len, - struct iphdr *iph, - struct tcphdr *tcph) -{ - return (seq + len - (iph->ihl + tcph->doff)*4 - + (tcph->syn ? 1 : 0) + (tcph->fin ? 1 : 0)); -} - -/* Fixme: what about big packets? */ -#define MAXACKWINCONST 66000 -#define MAXACKWINDOW(sender) \ - ((sender)->td_maxwin > MAXACKWINCONST ? 
(sender)->td_maxwin \ - : MAXACKWINCONST) - -/* - * Simplified tcp_parse_options routine from tcp_input.c - */ -static void tcp_options(const struct sk_buff *skb, - struct iphdr *iph, - struct tcphdr *tcph, - struct ip_ct_tcp_state *state) -{ - unsigned char buff[(15 * 4) - sizeof(struct tcphdr)]; - unsigned char *ptr; - int length = (tcph->doff*4) - sizeof(struct tcphdr); - - if (!length) - return; - - ptr = skb_header_pointer(skb, - (iph->ihl * 4) + sizeof(struct tcphdr), - length, buff); - BUG_ON(ptr == NULL); - - state->td_scale = - state->flags = 0; - - while (length > 0) { - int opcode=*ptr++; - int opsize; - - switch (opcode) { - case TCPOPT_EOL: - return; - case TCPOPT_NOP: /* Ref: RFC 793 section 3.1 */ - length--; - continue; - default: - opsize=*ptr++; - if (opsize < 2) /* "silly options" */ - return; - if (opsize > length) - break; /* don't parse partial options */ - - if (opcode == TCPOPT_SACK_PERM - && opsize == TCPOLEN_SACK_PERM) - state->flags |= IP_CT_TCP_FLAG_SACK_PERM; - else if (opcode == TCPOPT_WINDOW - && opsize == TCPOLEN_WINDOW) { - state->td_scale = *(u_int8_t *)ptr; - - if (state->td_scale > 14) { - /* See RFC1323 */ - state->td_scale = 14; - } - state->flags |= - IP_CT_TCP_FLAG_WINDOW_SCALE; - } - ptr += opsize - 2; - length -= opsize; - } - } -} - -static void tcp_sack(const struct sk_buff *skb, - struct iphdr *iph, - struct tcphdr *tcph, - __u32 *sack) -{ - unsigned char buff[(15 * 4) - sizeof(struct tcphdr)]; - unsigned char *ptr; - int length = (tcph->doff*4) - sizeof(struct tcphdr); - __u32 tmp; - - if (!length) - return; - - ptr = skb_header_pointer(skb, - (iph->ihl * 4) + sizeof(struct tcphdr), - length, buff); - BUG_ON(ptr == NULL); - - /* Fast path for timestamp-only option */ - if (length == TCPOLEN_TSTAMP_ALIGNED*4 - && *(__be32 *)ptr == - __constant_htonl((TCPOPT_NOP << 24) - | (TCPOPT_NOP << 16) - | (TCPOPT_TIMESTAMP << 8) - | TCPOLEN_TIMESTAMP)) - return; - - while (length > 0) { - int opcode=*ptr++; - int opsize, i; - - switch (opcode) { - case TCPOPT_EOL: - return; - case TCPOPT_NOP: /* Ref: RFC 793 section 3.1 */ - length--; - continue; - default: - opsize=*ptr++; - if (opsize < 2) /* "silly options" */ - return; - if (opsize > length) - break; /* don't parse partial options */ - - if (opcode == TCPOPT_SACK - && opsize >= (TCPOLEN_SACK_BASE - + TCPOLEN_SACK_PERBLOCK) - && !((opsize - TCPOLEN_SACK_BASE) - % TCPOLEN_SACK_PERBLOCK)) { - for (i = 0; - i < (opsize - TCPOLEN_SACK_BASE); - i += TCPOLEN_SACK_PERBLOCK) { - tmp = ntohl(*((__be32 *)(ptr+i)+1)); - - if (after(tmp, *sack)) - *sack = tmp; - } - return; - } - ptr += opsize - 2; - length -= opsize; - } - } -} - -static int tcp_in_window(struct ip_ct_tcp *state, - enum ip_conntrack_dir dir, - unsigned int index, - const struct sk_buff *skb, - struct iphdr *iph, - struct tcphdr *tcph) -{ - struct ip_ct_tcp_state *sender = &state->seen[dir]; - struct ip_ct_tcp_state *receiver = &state->seen[!dir]; - __u32 seq, ack, sack, end, win, swin; - int res; - - /* - * Get the required data from the packet. 
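tcp_options() above caps the advertised shift at 14 and records IP_CT_TCP_FLAG_WINDOW_SCALE; tcp_in_window() further down widens non-SYN windows by that shift only when both directions sent the option (RFC 1323). A small sketch of the effective-window computation under those same assumptions (the helper name is illustrative):

/* Effective window once scaling is in force: the shift never applies to the
 * SYN segment itself, and td_scale is already capped at 14 by tcp_options(). */
static __u32 example_effective_window(const struct ip_ct_tcp_state *sender,
				      const struct tcphdr *tcph)
{
	__u32 win = ntohs(tcph->window);

	if (!tcph->syn)
		win <<= sender->td_scale;
	return win;
}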
- */ - seq = ntohl(tcph->seq); - ack = sack = ntohl(tcph->ack_seq); - win = ntohs(tcph->window); - end = segment_seq_plus_len(seq, skb->len, iph, tcph); - - if (receiver->flags & IP_CT_TCP_FLAG_SACK_PERM) - tcp_sack(skb, iph, tcph, &sack); - - DEBUGP("tcp_in_window: START\n"); - DEBUGP("tcp_in_window: src=%u.%u.%u.%u:%hu dst=%u.%u.%u.%u:%hu " - "seq=%u ack=%u sack=%u win=%u end=%u\n", - NIPQUAD(iph->saddr), ntohs(tcph->source), - NIPQUAD(iph->daddr), ntohs(tcph->dest), - seq, ack, sack, win, end); - DEBUGP("tcp_in_window: sender end=%u maxend=%u maxwin=%u scale=%i " - "receiver end=%u maxend=%u maxwin=%u scale=%i\n", - sender->td_end, sender->td_maxend, sender->td_maxwin, - sender->td_scale, - receiver->td_end, receiver->td_maxend, receiver->td_maxwin, - receiver->td_scale); - - if (sender->td_end == 0) { - /* - * Initialize sender data. - */ - if (tcph->syn && tcph->ack) { - /* - * Outgoing SYN-ACK in reply to a SYN. - */ - sender->td_end = - sender->td_maxend = end; - sender->td_maxwin = (win == 0 ? 1 : win); - - tcp_options(skb, iph, tcph, sender); - /* - * RFC 1323: - * Both sides must send the Window Scale option - * to enable window scaling in either direction. - */ - if (!(sender->flags & IP_CT_TCP_FLAG_WINDOW_SCALE - && receiver->flags & IP_CT_TCP_FLAG_WINDOW_SCALE)) - sender->td_scale = - receiver->td_scale = 0; - } else { - /* - * We are in the middle of a connection, - * its history is lost for us. - * Let's try to use the data from the packet. - */ - sender->td_end = end; - sender->td_maxwin = (win == 0 ? 1 : win); - sender->td_maxend = end + sender->td_maxwin; - } - } else if (((state->state == TCP_CONNTRACK_SYN_SENT - && dir == IP_CT_DIR_ORIGINAL) - || (state->state == TCP_CONNTRACK_SYN_RECV - && dir == IP_CT_DIR_REPLY)) - && after(end, sender->td_end)) { - /* - * RFC 793: "if a TCP is reinitialized ... then it need - * not wait at all; it must only be sure to use sequence - * numbers larger than those recently used." - */ - sender->td_end = - sender->td_maxend = end; - sender->td_maxwin = (win == 0 ? 1 : win); - - tcp_options(skb, iph, tcph, sender); - } - - if (!(tcph->ack)) { - /* - * If there is no ACK, just pretend it was set and OK. - */ - ack = sack = receiver->td_end; - } else if (((tcp_flag_word(tcph) & (TCP_FLAG_ACK|TCP_FLAG_RST)) == - (TCP_FLAG_ACK|TCP_FLAG_RST)) - && (ack == 0)) { - /* - * Broken TCP stacks, that set ACK in RST packets as well - * with zero ack value. - */ - ack = sack = receiver->td_end; - } - - if (seq == end - && (!tcph->rst - || (seq == 0 && state->state == TCP_CONNTRACK_SYN_SENT))) - /* - * Packets contains no data: we assume it is valid - * and check the ack value only. - * However RST segments are always validated by their - * SEQ number, except when seq == 0 (reset sent answering - * SYN. 
- */ - seq = end = sender->td_end; - - DEBUGP("tcp_in_window: src=%u.%u.%u.%u:%hu dst=%u.%u.%u.%u:%hu " - "seq=%u ack=%u sack =%u win=%u end=%u\n", - NIPQUAD(iph->saddr), ntohs(tcph->source), - NIPQUAD(iph->daddr), ntohs(tcph->dest), - seq, ack, sack, win, end); - DEBUGP("tcp_in_window: sender end=%u maxend=%u maxwin=%u scale=%i " - "receiver end=%u maxend=%u maxwin=%u scale=%i\n", - sender->td_end, sender->td_maxend, sender->td_maxwin, - sender->td_scale, - receiver->td_end, receiver->td_maxend, receiver->td_maxwin, - receiver->td_scale); - - DEBUGP("tcp_in_window: I=%i II=%i III=%i IV=%i\n", - before(seq, sender->td_maxend + 1), - after(end, sender->td_end - receiver->td_maxwin - 1), - before(sack, receiver->td_end + 1), - after(ack, receiver->td_end - MAXACKWINDOW(sender))); - - if (before(seq, sender->td_maxend + 1) && - after(end, sender->td_end - receiver->td_maxwin - 1) && - before(sack, receiver->td_end + 1) && - after(ack, receiver->td_end - MAXACKWINDOW(sender))) { - /* - * Take into account window scaling (RFC 1323). - */ - if (!tcph->syn) - win <<= sender->td_scale; - - /* - * Update sender data. - */ - swin = win + (sack - ack); - if (sender->td_maxwin < swin) - sender->td_maxwin = swin; - if (after(end, sender->td_end)) - sender->td_end = end; - /* - * Update receiver data. - */ - if (after(end, sender->td_maxend)) - receiver->td_maxwin += end - sender->td_maxend; - if (after(sack + win, receiver->td_maxend - 1)) { - receiver->td_maxend = sack + win; - if (win == 0) - receiver->td_maxend++; - } - - /* - * Check retransmissions. - */ - if (index == TCP_ACK_SET) { - if (state->last_dir == dir - && state->last_seq == seq - && state->last_ack == ack - && state->last_end == end - && state->last_win == win) - state->retrans++; - else { - state->last_dir = dir; - state->last_seq = seq; - state->last_ack = ack; - state->last_end = end; - state->last_win = win; - state->retrans = 0; - } - } - res = 1; - } else { - res = 0; - if (sender->flags & IP_CT_TCP_FLAG_BE_LIBERAL || - ip_ct_tcp_be_liberal) - res = 1; - if (!res && LOG_INVALID(IPPROTO_TCP)) - nf_log_packet(PF_INET, 0, skb, NULL, NULL, NULL, - "ip_ct_tcp: %s ", - before(seq, sender->td_maxend + 1) ? - after(end, sender->td_end - receiver->td_maxwin - 1) ? - before(sack, receiver->td_end + 1) ? - after(ack, receiver->td_end - MAXACKWINDOW(sender)) ? "BUG" - : "ACK is under the lower bound (possible overly delayed ACK)" - : "ACK is over the upper bound (ACKed data not seen yet)" - : "SEQ is under the lower bound (already ACKed data retransmitted)" - : "SEQ is over the upper bound (over the window of the receiver)"); - } - - DEBUGP("tcp_in_window: res=%i sender end=%u maxend=%u maxwin=%u " - "receiver end=%u maxend=%u maxwin=%u\n", - res, sender->td_end, sender->td_maxend, sender->td_maxwin, - receiver->td_end, receiver->td_maxend, receiver->td_maxwin); - - return res; -} - -#ifdef CONFIG_IP_NF_NAT_NEEDED -/* Update sender->td_end after NAT successfully mangled the packet */ -void ip_conntrack_tcp_update(struct sk_buff *skb, - struct ip_conntrack *conntrack, - enum ip_conntrack_dir dir) -{ - struct iphdr *iph = skb->nh.iph; - struct tcphdr *tcph = (void *)skb->nh.iph + skb->nh.iph->ihl*4; - __u32 end; -#ifdef DEBUGP_VARS - struct ip_ct_tcp_state *sender = &conntrack->proto.tcp.seen[dir]; - struct ip_ct_tcp_state *receiver = &conntrack->proto.tcp.seen[!dir]; -#endif - - end = segment_seq_plus_len(ntohl(tcph->seq), skb->len, iph, tcph); - - write_lock_bh(&tcp_lock); - /* - * We have to worry for the ack in the reply packet only... 
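The four conditions I-IV from the comment block above are exactly what tcp_in_window() tests, using before()/after() so the comparisons stay correct across sequence-number wraparound. Pulled out on their own they read as one predicate (a sketch only; the function name is made up):

/* I:   seq must not lie beyond what the sender may send,
 * II:  the segment must reach into the receiver's window,
 * III/IV: the (S)ACK must refer to data we have actually seen. */
static int example_segment_in_window(const struct ip_ct_tcp_state *sender,
				     const struct ip_ct_tcp_state *receiver,
				     __u32 seq, __u32 end,
				     __u32 sack, __u32 ack)
{
	return before(seq, sender->td_maxend + 1) &&
	       after(end, sender->td_end - receiver->td_maxwin - 1) &&
	       before(sack, receiver->td_end + 1) &&
	       after(ack, receiver->td_end - MAXACKWINDOW(sender));
}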
- */ - if (after(end, conntrack->proto.tcp.seen[dir].td_end)) - conntrack->proto.tcp.seen[dir].td_end = end; - conntrack->proto.tcp.last_end = end; - write_unlock_bh(&tcp_lock); - DEBUGP("tcp_update: sender end=%u maxend=%u maxwin=%u scale=%i " - "receiver end=%u maxend=%u maxwin=%u scale=%i\n", - sender->td_end, sender->td_maxend, sender->td_maxwin, - sender->td_scale, - receiver->td_end, receiver->td_maxend, receiver->td_maxwin, - receiver->td_scale); -} - -#endif - -#define TH_FIN 0x01 -#define TH_SYN 0x02 -#define TH_RST 0x04 -#define TH_PUSH 0x08 -#define TH_ACK 0x10 -#define TH_URG 0x20 -#define TH_ECE 0x40 -#define TH_CWR 0x80 - -/* table of valid flag combinations - ECE and CWR are always valid */ -static const u8 tcp_valid_flags[(TH_FIN|TH_SYN|TH_RST|TH_PUSH|TH_ACK|TH_URG) + 1] = -{ - [TH_SYN] = 1, - [TH_SYN|TH_PUSH] = 1, - [TH_SYN|TH_URG] = 1, - [TH_SYN|TH_PUSH|TH_URG] = 1, - [TH_SYN|TH_ACK] = 1, - [TH_SYN|TH_ACK|TH_PUSH] = 1, - [TH_RST] = 1, - [TH_RST|TH_ACK] = 1, - [TH_RST|TH_ACK|TH_PUSH] = 1, - [TH_FIN|TH_ACK] = 1, - [TH_ACK] = 1, - [TH_ACK|TH_PUSH] = 1, - [TH_ACK|TH_URG] = 1, - [TH_ACK|TH_URG|TH_PUSH] = 1, - [TH_FIN|TH_ACK|TH_PUSH] = 1, - [TH_FIN|TH_ACK|TH_URG] = 1, - [TH_FIN|TH_ACK|TH_URG|TH_PUSH] = 1, -}; - -/* Protect conntrack agaist broken packets. Code taken from ipt_unclean.c. */ -static int tcp_error(struct sk_buff *skb, - enum ip_conntrack_info *ctinfo, - unsigned int hooknum) -{ - struct iphdr *iph = skb->nh.iph; - struct tcphdr _tcph, *th; - unsigned int tcplen = skb->len - iph->ihl * 4; - u_int8_t tcpflags; - - /* Smaller that minimal TCP header? */ - th = skb_header_pointer(skb, iph->ihl * 4, - sizeof(_tcph), &_tcph); - if (th == NULL) { - if (LOG_INVALID(IPPROTO_TCP)) - nf_log_packet(PF_INET, 0, skb, NULL, NULL, NULL, - "ip_ct_tcp: short packet "); - return -NF_ACCEPT; - } - - /* Not whole TCP header or malformed packet */ - if (th->doff*4 < sizeof(struct tcphdr) || tcplen < th->doff*4) { - if (LOG_INVALID(IPPROTO_TCP)) - nf_log_packet(PF_INET, 0, skb, NULL, NULL, NULL, - "ip_ct_tcp: truncated/malformed packet "); - return -NF_ACCEPT; - } - - /* Checksum invalid? Ignore. - * We skip checking packets on the outgoing path - * because it is assumed to be correct. - */ - /* FIXME: Source route IP option packets --RR */ - if (ip_conntrack_checksum && hooknum == NF_IP_PRE_ROUTING && - nf_ip_checksum(skb, hooknum, iph->ihl * 4, IPPROTO_TCP)) { - if (LOG_INVALID(IPPROTO_TCP)) - nf_log_packet(PF_INET, 0, skb, NULL, NULL, NULL, - "ip_ct_tcp: bad TCP checksum "); - return -NF_ACCEPT; - } - - /* Check TCP flags. */ - tcpflags = (((u_int8_t *)th)[13] & ~(TH_ECE|TH_CWR)); - if (!tcp_valid_flags[tcpflags]) { - if (LOG_INVALID(IPPROTO_TCP)) - nf_log_packet(PF_INET, 0, skb, NULL, NULL, NULL, - "ip_ct_tcp: invalid TCP flag combination "); - return -NF_ACCEPT; - } - - return NF_ACCEPT; -} - -/* Returns verdict for packet, or -1 for invalid. 
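tcp_error() above rejects flag combinations that no sane stack emits: ECE and CWR are masked off and the remaining six flag bits index tcp_valid_flags[]. A compact sketch of that classification (wrapper name is illustrative):

/* SYN|ACK or FIN|ACK index to 1 (valid); nonsense such as SYN|FIN or an
 * empty flag byte indexes to 0 and the packet is treated as invalid. */
static int example_flags_valid(struct tcphdr *th)
{
	u_int8_t flags = ((u_int8_t *)th)[13] & ~(TH_ECE | TH_CWR);

	return tcp_valid_flags[flags];
}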
*/ -static int tcp_packet(struct ip_conntrack *conntrack, - const struct sk_buff *skb, - enum ip_conntrack_info ctinfo) -{ - enum tcp_conntrack new_state, old_state; - enum ip_conntrack_dir dir; - struct iphdr *iph = skb->nh.iph; - struct tcphdr *th, _tcph; - unsigned long timeout; - unsigned int index; - - th = skb_header_pointer(skb, iph->ihl * 4, - sizeof(_tcph), &_tcph); - BUG_ON(th == NULL); - - write_lock_bh(&tcp_lock); - old_state = conntrack->proto.tcp.state; - dir = CTINFO2DIR(ctinfo); - index = get_conntrack_index(th); - new_state = tcp_conntracks[dir][index][old_state]; - - switch (new_state) { - case TCP_CONNTRACK_IGNORE: - /* Ignored packets: - * - * a) SYN in ORIGINAL - * b) SYN/ACK in REPLY - * c) ACK in reply direction after initial SYN in original. - */ - if (index == TCP_SYNACK_SET - && conntrack->proto.tcp.last_index == TCP_SYN_SET - && conntrack->proto.tcp.last_dir != dir - && ntohl(th->ack_seq) == - conntrack->proto.tcp.last_end) { - /* This SYN/ACK acknowledges a SYN that we earlier - * ignored as invalid. This means that the client and - * the server are both in sync, while the firewall is - * not. We kill this session and block the SYN/ACK so - * that the client cannot but retransmit its SYN and - * thus initiate a clean new session. - */ - write_unlock_bh(&tcp_lock); - if (LOG_INVALID(IPPROTO_TCP)) - nf_log_packet(PF_INET, 0, skb, NULL, NULL, - NULL, "ip_ct_tcp: " - "killing out of sync session "); - if (del_timer(&conntrack->timeout)) - conntrack->timeout.function((unsigned long) - conntrack); - return -NF_DROP; - } - conntrack->proto.tcp.last_index = index; - conntrack->proto.tcp.last_dir = dir; - conntrack->proto.tcp.last_seq = ntohl(th->seq); - conntrack->proto.tcp.last_end = - segment_seq_plus_len(ntohl(th->seq), skb->len, iph, th); - - write_unlock_bh(&tcp_lock); - if (LOG_INVALID(IPPROTO_TCP)) - nf_log_packet(PF_INET, 0, skb, NULL, NULL, NULL, - "ip_ct_tcp: invalid packet ignored "); - return NF_ACCEPT; - case TCP_CONNTRACK_MAX: - /* Invalid packet */ - DEBUGP("ip_ct_tcp: Invalid dir=%i index=%u ostate=%u\n", - dir, get_conntrack_index(th), - old_state); - write_unlock_bh(&tcp_lock); - if (LOG_INVALID(IPPROTO_TCP)) - nf_log_packet(PF_INET, 0, skb, NULL, NULL, NULL, - "ip_ct_tcp: invalid state "); - return -NF_ACCEPT; - case TCP_CONNTRACK_SYN_SENT: - if (old_state < TCP_CONNTRACK_TIME_WAIT) - break; - if ((conntrack->proto.tcp.seen[dir].flags & - IP_CT_TCP_FLAG_CLOSE_INIT) - || after(ntohl(th->seq), - conntrack->proto.tcp.seen[dir].td_end)) { - /* Attempt to reopen a closed connection. - * Delete this connection and look up again. */ - write_unlock_bh(&tcp_lock); - if (del_timer(&conntrack->timeout)) - conntrack->timeout.function((unsigned long) - conntrack); - return -NF_REPEAT; - } else { - write_unlock_bh(&tcp_lock); - if (LOG_INVALID(IPPROTO_TCP)) - nf_log_packet(PF_INET, 0, skb, NULL, NULL, - NULL, "ip_ct_tcp: invalid SYN"); - return -NF_ACCEPT; - } - case TCP_CONNTRACK_CLOSE: - if (index == TCP_RST_SET - && ((test_bit(IPS_SEEN_REPLY_BIT, &conntrack->status) - && conntrack->proto.tcp.last_index == TCP_SYN_SET) - || (!test_bit(IPS_ASSURED_BIT, &conntrack->status) - && conntrack->proto.tcp.last_index == TCP_ACK_SET)) - && ntohl(th->ack_seq) == conntrack->proto.tcp.last_end) { - /* RST sent to invalid SYN or ACK we had let through - * at a) and c) above: - * - * a) SYN was in window then - * c) we hold a half-open connection. - * - * Delete our connection entry. - * We skip window checking, because packet might ACK - * segments we ignored. 
*/ - goto in_window; - } - /* Just fall through */ - default: - /* Keep compilers happy. */ - break; - } - - if (!tcp_in_window(&conntrack->proto.tcp, dir, index, - skb, iph, th)) { - write_unlock_bh(&tcp_lock); - return -NF_ACCEPT; - } - in_window: - /* From now on we have got in-window packets */ - conntrack->proto.tcp.last_index = index; - - DEBUGP("tcp_conntracks: src=%u.%u.%u.%u:%hu dst=%u.%u.%u.%u:%hu " - "syn=%i ack=%i fin=%i rst=%i old=%i new=%i\n", - NIPQUAD(iph->saddr), ntohs(th->source), - NIPQUAD(iph->daddr), ntohs(th->dest), - (th->syn ? 1 : 0), (th->ack ? 1 : 0), - (th->fin ? 1 : 0), (th->rst ? 1 : 0), - old_state, new_state); - - conntrack->proto.tcp.state = new_state; - if (old_state != new_state - && (new_state == TCP_CONNTRACK_FIN_WAIT - || new_state == TCP_CONNTRACK_CLOSE)) - conntrack->proto.tcp.seen[dir].flags |= IP_CT_TCP_FLAG_CLOSE_INIT; - timeout = conntrack->proto.tcp.retrans >= ip_ct_tcp_max_retrans - && *tcp_timeouts[new_state] > ip_ct_tcp_timeout_max_retrans - ? ip_ct_tcp_timeout_max_retrans : *tcp_timeouts[new_state]; - write_unlock_bh(&tcp_lock); - - ip_conntrack_event_cache(IPCT_PROTOINFO_VOLATILE, skb); - if (new_state != old_state) - ip_conntrack_event_cache(IPCT_PROTOINFO, skb); - - if (!test_bit(IPS_SEEN_REPLY_BIT, &conntrack->status)) { - /* If only reply is a RST, we can consider ourselves not to - have an established connection: this is a fairly common - problem case, so we can delete the conntrack - immediately. --RR */ - if (th->rst) { - if (del_timer(&conntrack->timeout)) - conntrack->timeout.function((unsigned long) - conntrack); - return NF_ACCEPT; - } - } else if (!test_bit(IPS_ASSURED_BIT, &conntrack->status) - && (old_state == TCP_CONNTRACK_SYN_RECV - || old_state == TCP_CONNTRACK_ESTABLISHED) - && new_state == TCP_CONNTRACK_ESTABLISHED) { - /* Set ASSURED if we see see valid ack in ESTABLISHED - after SYN_RECV or a valid answer for a picked up - connection. */ - set_bit(IPS_ASSURED_BIT, &conntrack->status); - ip_conntrack_event_cache(IPCT_STATUS, skb); - } - ip_ct_refresh_acct(conntrack, ctinfo, skb, timeout); - - return NF_ACCEPT; -} - -/* Called when a new connection for this protocol found. */ -static int tcp_new(struct ip_conntrack *conntrack, - const struct sk_buff *skb) -{ - enum tcp_conntrack new_state; - struct iphdr *iph = skb->nh.iph; - struct tcphdr *th, _tcph; -#ifdef DEBUGP_VARS - struct ip_ct_tcp_state *sender = &conntrack->proto.tcp.seen[0]; - struct ip_ct_tcp_state *receiver = &conntrack->proto.tcp.seen[1]; -#endif - - th = skb_header_pointer(skb, iph->ihl * 4, - sizeof(_tcph), &_tcph); - BUG_ON(th == NULL); - - /* Don't need lock here: this conntrack not in circulation yet */ - new_state - = tcp_conntracks[0][get_conntrack_index(th)] - [TCP_CONNTRACK_NONE]; - - /* Invalid: delete conntrack */ - if (new_state >= TCP_CONNTRACK_MAX) { - DEBUGP("ip_ct_tcp: invalid new deleting.\n"); - return 0; - } - - if (new_state == TCP_CONNTRACK_SYN_SENT) { - /* SYN packet */ - conntrack->proto.tcp.seen[0].td_end = - segment_seq_plus_len(ntohl(th->seq), skb->len, - iph, th); - conntrack->proto.tcp.seen[0].td_maxwin = ntohs(th->window); - if (conntrack->proto.tcp.seen[0].td_maxwin == 0) - conntrack->proto.tcp.seen[0].td_maxwin = 1; - conntrack->proto.tcp.seen[0].td_maxend = - conntrack->proto.tcp.seen[0].td_end; - - tcp_options(skb, iph, th, &conntrack->proto.tcp.seen[0]); - conntrack->proto.tcp.seen[1].flags = 0; - } else if (ip_ct_tcp_loose == 0) { - /* Don't try to pick up connections. 
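Several branches above delete a connection on the spot rather than letting it age out: the idiom is to cancel the pending timeout and invoke its handler directly, which tears the conntrack down immediately. As a standalone sketch (the function name is illustrative):

/* del_timer() returns nonzero only if the timer was still pending, so the
 * destruction handler runs exactly once. */
static void example_kill_conntrack(struct ip_conntrack *ct)
{
	if (del_timer(&ct->timeout))
		ct->timeout.function((unsigned long)ct);
}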
*/ - return 0; - } else { - /* - * We are in the middle of a connection, - * its history is lost for us. - * Let's try to use the data from the packet. - */ - conntrack->proto.tcp.seen[0].td_end = - segment_seq_plus_len(ntohl(th->seq), skb->len, - iph, th); - conntrack->proto.tcp.seen[0].td_maxwin = ntohs(th->window); - if (conntrack->proto.tcp.seen[0].td_maxwin == 0) - conntrack->proto.tcp.seen[0].td_maxwin = 1; - conntrack->proto.tcp.seen[0].td_maxend = - conntrack->proto.tcp.seen[0].td_end + - conntrack->proto.tcp.seen[0].td_maxwin; - conntrack->proto.tcp.seen[0].td_scale = 0; - - /* We assume SACK and liberal window checking to handle - * window scaling */ - conntrack->proto.tcp.seen[0].flags = - conntrack->proto.tcp.seen[1].flags = IP_CT_TCP_FLAG_SACK_PERM | - IP_CT_TCP_FLAG_BE_LIBERAL; - } - - conntrack->proto.tcp.seen[1].td_end = 0; - conntrack->proto.tcp.seen[1].td_maxend = 0; - conntrack->proto.tcp.seen[1].td_maxwin = 1; - conntrack->proto.tcp.seen[1].td_scale = 0; - - /* tcp_packet will set them */ - conntrack->proto.tcp.state = TCP_CONNTRACK_NONE; - conntrack->proto.tcp.last_index = TCP_NONE_SET; - - DEBUGP("tcp_new: sender end=%u maxend=%u maxwin=%u scale=%i " - "receiver end=%u maxend=%u maxwin=%u scale=%i\n", - sender->td_end, sender->td_maxend, sender->td_maxwin, - sender->td_scale, - receiver->td_end, receiver->td_maxend, receiver->td_maxwin, - receiver->td_scale); - return 1; -} - -struct ip_conntrack_protocol ip_conntrack_protocol_tcp = -{ - .proto = IPPROTO_TCP, - .name = "tcp", - .pkt_to_tuple = tcp_pkt_to_tuple, - .invert_tuple = tcp_invert_tuple, - .print_tuple = tcp_print_tuple, - .print_conntrack = tcp_print_conntrack, - .packet = tcp_packet, - .new = tcp_new, - .error = tcp_error, -#if defined(CONFIG_IP_NF_CONNTRACK_NETLINK) || \ - defined(CONFIG_IP_NF_CONNTRACK_NETLINK_MODULE) - .to_nfattr = tcp_to_nfattr, - .from_nfattr = nfattr_to_tcp, - .tuple_to_nfattr = ip_ct_port_tuple_to_nfattr, - .nfattr_to_tuple = ip_ct_port_nfattr_to_tuple, -#endif -}; diff -puN net/ipv4/netfilter/ip_conntrack_proto_udp.c~git-net /dev/null --- a/net/ipv4/netfilter/ip_conntrack_proto_udp.c +++ /dev/null @@ -1,148 +0,0 @@ -/* (C) 1999-2001 Paul `Rusty' Russell - * (C) 2002-2004 Netfilter Core Team - * - * This program is free software; you can redistribute it and/or modify - * it under the terms of the GNU General Public License version 2 as - * published by the Free Software Foundation. - */ - -#include -#include -#include -#include -#include -#include -#include -#include -#include -#include - -unsigned int ip_ct_udp_timeout __read_mostly = 30*HZ; -unsigned int ip_ct_udp_timeout_stream __read_mostly = 180*HZ; - -static int udp_pkt_to_tuple(const struct sk_buff *skb, - unsigned int dataoff, - struct ip_conntrack_tuple *tuple) -{ - struct udphdr _hdr, *hp; - - /* Actually only need first 8 bytes. */ - hp = skb_header_pointer(skb, dataoff, sizeof(_hdr), &_hdr); - if (hp == NULL) - return 0; - - tuple->src.u.udp.port = hp->source; - tuple->dst.u.udp.port = hp->dest; - - return 1; -} - -static int udp_invert_tuple(struct ip_conntrack_tuple *tuple, - const struct ip_conntrack_tuple *orig) -{ - tuple->src.u.udp.port = orig->dst.u.udp.port; - tuple->dst.u.udp.port = orig->src.u.udp.port; - return 1; -} - -/* Print out the per-protocol part of the tuple. 
*/ -static int udp_print_tuple(struct seq_file *s, - const struct ip_conntrack_tuple *tuple) -{ - return seq_printf(s, "sport=%hu dport=%hu ", - ntohs(tuple->src.u.udp.port), - ntohs(tuple->dst.u.udp.port)); -} - -/* Print out the private part of the conntrack. */ -static int udp_print_conntrack(struct seq_file *s, - const struct ip_conntrack *conntrack) -{ - return 0; -} - -/* Returns verdict for packet, and may modify conntracktype */ -static int udp_packet(struct ip_conntrack *conntrack, - const struct sk_buff *skb, - enum ip_conntrack_info ctinfo) -{ - /* If we've seen traffic both ways, this is some kind of UDP - stream. Extend timeout. */ - if (test_bit(IPS_SEEN_REPLY_BIT, &conntrack->status)) { - ip_ct_refresh_acct(conntrack, ctinfo, skb, - ip_ct_udp_timeout_stream); - /* Also, more likely to be important, and not a probe */ - if (!test_and_set_bit(IPS_ASSURED_BIT, &conntrack->status)) - ip_conntrack_event_cache(IPCT_STATUS, skb); - } else - ip_ct_refresh_acct(conntrack, ctinfo, skb, ip_ct_udp_timeout); - - return NF_ACCEPT; -} - -/* Called when a new connection for this protocol found. */ -static int udp_new(struct ip_conntrack *conntrack, const struct sk_buff *skb) -{ - return 1; -} - -static int udp_error(struct sk_buff *skb, enum ip_conntrack_info *ctinfo, - unsigned int hooknum) -{ - struct iphdr *iph = skb->nh.iph; - unsigned int udplen = skb->len - iph->ihl * 4; - struct udphdr _hdr, *hdr; - - /* Header is too small? */ - hdr = skb_header_pointer(skb, iph->ihl*4, sizeof(_hdr), &_hdr); - if (hdr == NULL) { - if (LOG_INVALID(IPPROTO_UDP)) - nf_log_packet(PF_INET, 0, skb, NULL, NULL, NULL, - "ip_ct_udp: short packet "); - return -NF_ACCEPT; - } - - /* Truncated/malformed packets */ - if (ntohs(hdr->len) > udplen || ntohs(hdr->len) < sizeof(*hdr)) { - if (LOG_INVALID(IPPROTO_UDP)) - nf_log_packet(PF_INET, 0, skb, NULL, NULL, NULL, - "ip_ct_udp: truncated/malformed packet "); - return -NF_ACCEPT; - } - - /* Packet with no checksum */ - if (!hdr->check) - return NF_ACCEPT; - - /* Checksum invalid? Ignore. - * We skip checking packets on the outgoing path - * because the checksum is assumed to be correct. - * FIXME: Source route IP option packets --RR */ - if (ip_conntrack_checksum && hooknum == NF_IP_PRE_ROUTING && - nf_ip_checksum(skb, hooknum, iph->ihl * 4, IPPROTO_UDP)) { - if (LOG_INVALID(IPPROTO_UDP)) - nf_log_packet(PF_INET, 0, skb, NULL, NULL, NULL, - "ip_ct_udp: bad UDP checksum "); - return -NF_ACCEPT; - } - - return NF_ACCEPT; -} - -struct ip_conntrack_protocol ip_conntrack_protocol_udp = -{ - .proto = IPPROTO_UDP, - .name = "udp", - .pkt_to_tuple = udp_pkt_to_tuple, - .invert_tuple = udp_invert_tuple, - .print_tuple = udp_print_tuple, - .print_conntrack = udp_print_conntrack, - .packet = udp_packet, - .new = udp_new, - .error = udp_error, -#if defined(CONFIG_IP_NF_CONNTRACK_NETLINK) || \ - defined(CONFIG_IP_NF_CONNTRACK_NETLINK_MODULE) - .tuple_to_nfattr = ip_ct_port_tuple_to_nfattr, - .nfattr_to_tuple = ip_ct_port_nfattr_to_tuple, -#endif -}; diff -puN net/ipv4/netfilter/ip_conntrack_sip.c~git-net /dev/null --- a/net/ipv4/netfilter/ip_conntrack_sip.c +++ /dev/null @@ -1,520 +0,0 @@ -/* SIP extension for IP connection tracking. - * - * (C) 2005 by Christian Hentschel - * based on RR's ip_conntrack_ftp.c and other modules. - * - * This program is free software; you can redistribute it and/or modify - * it under the terms of the GNU General Public License version 2 as - * published by the Free Software Foundation. 
- */ - -#include -#include -#include -#include -#include -#include - -#include -#include -#include -#include - -#if 0 -#define DEBUGP printk -#else -#define DEBUGP(format, args...) -#endif - -MODULE_LICENSE("GPL"); -MODULE_AUTHOR("Christian Hentschel "); -MODULE_DESCRIPTION("SIP connection tracking helper"); - -#define MAX_PORTS 8 -static unsigned short ports[MAX_PORTS]; -static int ports_c; -module_param_array(ports, ushort, &ports_c, 0400); -MODULE_PARM_DESC(ports, "port numbers of sip servers"); - -static unsigned int sip_timeout = SIP_TIMEOUT; -module_param(sip_timeout, uint, 0600); -MODULE_PARM_DESC(sip_timeout, "timeout for the master SIP session"); - -unsigned int (*ip_nat_sip_hook)(struct sk_buff **pskb, - enum ip_conntrack_info ctinfo, - struct ip_conntrack *ct, - const char **dptr); -EXPORT_SYMBOL_GPL(ip_nat_sip_hook); - -unsigned int (*ip_nat_sdp_hook)(struct sk_buff **pskb, - enum ip_conntrack_info ctinfo, - struct ip_conntrack_expect *exp, - const char *dptr); -EXPORT_SYMBOL_GPL(ip_nat_sdp_hook); - -static int digits_len(const char *dptr, const char *limit, int *shift); -static int epaddr_len(const char *dptr, const char *limit, int *shift); -static int skp_digits_len(const char *dptr, const char *limit, int *shift); -static int skp_epaddr_len(const char *dptr, const char *limit, int *shift); - -struct sip_header_nfo { - const char *lname; - const char *sname; - const char *ln_str; - size_t lnlen; - size_t snlen; - size_t ln_strlen; - int case_sensitive; - int (*match_len)(const char *, const char *, int *); -}; - -static struct sip_header_nfo ct_sip_hdrs[] = { - [POS_REG_REQ_URI] = { /* SIP REGISTER request URI */ - .lname = "sip:", - .lnlen = sizeof("sip:") - 1, - .ln_str = ":", - .ln_strlen = sizeof(":") - 1, - .match_len = epaddr_len - }, - [POS_REQ_URI] = { /* SIP request URI */ - .lname = "sip:", - .lnlen = sizeof("sip:") - 1, - .ln_str = "@", - .ln_strlen = sizeof("@") - 1, - .match_len = epaddr_len - }, - [POS_FROM] = { /* SIP From header */ - .lname = "From:", - .lnlen = sizeof("From:") - 1, - .sname = "\r\nf:", - .snlen = sizeof("\r\nf:") - 1, - .ln_str = "sip:", - .ln_strlen = sizeof("sip:") - 1, - .match_len = skp_epaddr_len, - }, - [POS_TO] = { /* SIP To header */ - .lname = "To:", - .lnlen = sizeof("To:") - 1, - .sname = "\r\nt:", - .snlen = sizeof("\r\nt:") - 1, - .ln_str = "sip:", - .ln_strlen = sizeof("sip:") - 1, - .match_len = skp_epaddr_len, - }, - [POS_VIA] = { /* SIP Via header */ - .lname = "Via:", - .lnlen = sizeof("Via:") - 1, - .sname = "\r\nv:", - .snlen = sizeof("\r\nv:") - 1, /* rfc3261 "\r\n" */ - .ln_str = "UDP ", - .ln_strlen = sizeof("UDP ") - 1, - .match_len = epaddr_len, - }, - [POS_CONTACT] = { /* SIP Contact header */ - .lname = "Contact:", - .lnlen = sizeof("Contact:") - 1, - .sname = "\r\nm:", - .snlen = sizeof("\r\nm:") - 1, - .ln_str = "sip:", - .ln_strlen = sizeof("sip:") - 1, - .match_len = skp_epaddr_len - }, - [POS_CONTENT] = { /* SIP Content length header */ - .lname = "Content-Length:", - .lnlen = sizeof("Content-Length:") - 1, - .sname = "\r\nl:", - .snlen = sizeof("\r\nl:") - 1, - .ln_str = ":", - .ln_strlen = sizeof(":") - 1, - .match_len = skp_digits_len - }, - [POS_MEDIA] = { /* SDP media info */ - .case_sensitive = 1, - .lname = "\nm=", - .lnlen = sizeof("\nm=") - 1, - .sname = "\rm=", - .snlen = sizeof("\rm=") - 1, - .ln_str = "audio ", - .ln_strlen = sizeof("audio ") - 1, - .match_len = digits_len - }, - [POS_OWNER] = { /* SDP owner address*/ - .case_sensitive = 1, - .lname = "\no=", - .lnlen = sizeof("\no=") - 1, - .sname 
= "\ro=", - .snlen = sizeof("\ro=") - 1, - .ln_str = "IN IP4 ", - .ln_strlen = sizeof("IN IP4 ") - 1, - .match_len = epaddr_len - }, - [POS_CONNECTION] = { /* SDP connection info */ - .case_sensitive = 1, - .lname = "\nc=", - .lnlen = sizeof("\nc=") - 1, - .sname = "\rc=", - .snlen = sizeof("\rc=") - 1, - .ln_str = "IN IP4 ", - .ln_strlen = sizeof("IN IP4 ") - 1, - .match_len = epaddr_len - }, - [POS_SDP_HEADER] = { /* SDP version header */ - .case_sensitive = 1, - .lname = "\nv=", - .lnlen = sizeof("\nv=") - 1, - .sname = "\rv=", - .snlen = sizeof("\rv=") - 1, - .ln_str = "=", - .ln_strlen = sizeof("=") - 1, - .match_len = digits_len - } -}; - -/* get line lenght until first CR or LF seen. */ -int ct_sip_lnlen(const char *line, const char *limit) -{ - const char *k = line; - - while ((line <= limit) && (*line == '\r' || *line == '\n')) - line++; - - while (line <= limit) { - if (*line == '\r' || *line == '\n') - break; - line++; - } - return line - k; -} -EXPORT_SYMBOL_GPL(ct_sip_lnlen); - -/* Linear string search, case sensitive. */ -const char *ct_sip_search(const char *needle, const char *haystack, - size_t needle_len, size_t haystack_len, - int case_sensitive) -{ - const char *limit = haystack + (haystack_len - needle_len); - - while (haystack <= limit) { - if (case_sensitive) { - if (strncmp(haystack, needle, needle_len) == 0) - return haystack; - } else { - if (strnicmp(haystack, needle, needle_len) == 0) - return haystack; - } - haystack++; - } - return NULL; -} -EXPORT_SYMBOL_GPL(ct_sip_search); - -static int digits_len(const char *dptr, const char *limit, int *shift) -{ - int len = 0; - while (dptr <= limit && isdigit(*dptr)) { - dptr++; - len++; - } - return len; -} - -/* get digits lenght, skiping blank spaces. */ -static int skp_digits_len(const char *dptr, const char *limit, int *shift) -{ - for (; dptr <= limit && *dptr == ' '; dptr++) - (*shift)++; - - return digits_len(dptr, limit, shift); -} - -/* Simple ipaddr parser.. */ -static int parse_ipaddr(const char *cp, const char **endp, - __be32 *ipaddr, const char *limit) -{ - unsigned long int val; - int i, digit = 0; - - for (i = 0, *ipaddr = 0; cp <= limit && i < 4; i++) { - digit = 0; - if (!isdigit(*cp)) - break; - - val = simple_strtoul(cp, (char **)&cp, 10); - if (val > 0xFF) - return -1; - - ((u_int8_t *)ipaddr)[i] = val; - digit = 1; - - if (*cp != '.') - break; - cp++; - } - if (!digit) - return -1; - - if (endp) - *endp = cp; - - return 0; -} - -/* skip ip address. returns it lenght. */ -static int epaddr_len(const char *dptr, const char *limit, int *shift) -{ - const char *aux = dptr; - __be32 ip; - - if (parse_ipaddr(dptr, &dptr, &ip, limit) < 0) { - DEBUGP("ip: %s parse failed.!\n", dptr); - return 0; - } - - /* Port number */ - if (*dptr == ':') { - dptr++; - dptr += digits_len(dptr, limit, shift); - } - return dptr - aux; -} - -/* get address length, skiping user info. */ -static int skp_epaddr_len(const char *dptr, const char *limit, int *shift) -{ - int s = *shift; - - /* Search for @, but stop at the end of the line. - * We are inside a sip: URI, so we don't need to worry about - * continuation lines. */ - while (dptr <= limit && - *dptr != '@' && *dptr != '\r' && *dptr != '\n') { - (*shift)++; - dptr++; - } - - if (dptr <= limit && *dptr == '@') { - dptr++; - (*shift)++; - } else - *shift = s; - - return epaddr_len(dptr, limit, shift); -} - -/* Returns 0 if not found, -1 error parsing. 
*/ -int ct_sip_get_info(const char *dptr, size_t dlen, - unsigned int *matchoff, - unsigned int *matchlen, - enum sip_header_pos pos) -{ - struct sip_header_nfo *hnfo = &ct_sip_hdrs[pos]; - const char *limit, *aux, *k = dptr; - int shift = 0; - - limit = dptr + (dlen - hnfo->lnlen); - - while (dptr <= limit) { - if ((strncmp(dptr, hnfo->lname, hnfo->lnlen) != 0) && - (hnfo->sname == NULL || - strncmp(dptr, hnfo->sname, hnfo->snlen) != 0)) { - dptr++; - continue; - } - aux = ct_sip_search(hnfo->ln_str, dptr, hnfo->ln_strlen, - ct_sip_lnlen(dptr, limit), - hnfo->case_sensitive); - if (!aux) { - DEBUGP("'%s' not found in '%s'.\n", hnfo->ln_str, - hnfo->lname); - return -1; - } - aux += hnfo->ln_strlen; - - *matchlen = hnfo->match_len(aux, limit, &shift); - if (!*matchlen) - return -1; - - *matchoff = (aux - k) + shift; - - DEBUGP("%s match succeeded! - len: %u\n", hnfo->lname, - *matchlen); - return 1; - } - DEBUGP("%s header not found.\n", hnfo->lname); - return 0; -} -EXPORT_SYMBOL_GPL(ct_sip_get_info); - -static int set_expected_rtp(struct sk_buff **pskb, - struct ip_conntrack *ct, - enum ip_conntrack_info ctinfo, - __be32 ipaddr, u_int16_t port, - const char *dptr) -{ - struct ip_conntrack_expect *exp; - enum ip_conntrack_dir dir = CTINFO2DIR(ctinfo); - int ret; - typeof(ip_nat_sdp_hook) ip_nat_sdp; - - exp = ip_conntrack_expect_alloc(ct); - if (exp == NULL) - return NF_DROP; - - exp->tuple.src.ip = ct->tuplehash[!dir].tuple.src.ip; - exp->tuple.src.u.udp.port = 0; - exp->tuple.dst.ip = ipaddr; - exp->tuple.dst.u.udp.port = htons(port); - exp->tuple.dst.protonum = IPPROTO_UDP; - - exp->mask.src.ip = htonl(0xFFFFFFFF); - exp->mask.src.u.udp.port = 0; - exp->mask.dst.ip = htonl(0xFFFFFFFF); - exp->mask.dst.u.udp.port = htons(0xFFFF); - exp->mask.dst.protonum = 0xFF; - - exp->expectfn = NULL; - exp->flags = 0; - - ip_nat_sdp = rcu_dereference(ip_nat_sdp_hook); - if (ip_nat_sdp) - ret = ip_nat_sdp(pskb, ctinfo, exp, dptr); - else { - if (ip_conntrack_expect_related(exp) != 0) - ret = NF_DROP; - else - ret = NF_ACCEPT; - } - ip_conntrack_expect_put(exp); - - return ret; -} - -static int sip_help(struct sk_buff **pskb, - struct ip_conntrack *ct, - enum ip_conntrack_info ctinfo) -{ - unsigned int dataoff, datalen; - const char *dptr; - int ret = NF_ACCEPT; - int matchoff, matchlen; - __be32 ipaddr; - u_int16_t port; - typeof(ip_nat_sip_hook) ip_nat_sip; - - /* No Data ? */ - dataoff = (*pskb)->nh.iph->ihl*4 + sizeof(struct udphdr); - if (dataoff >= (*pskb)->len) { - DEBUGP("skb->len = %u\n", (*pskb)->len); - return NF_ACCEPT; - } - - ip_ct_refresh(ct, *pskb, sip_timeout * HZ); - - if (!skb_is_nonlinear(*pskb)) - dptr = (*pskb)->data + dataoff; - else { - DEBUGP("Copy of skbuff not supported yet.\n"); - goto out; - } - - ip_nat_sip = rcu_dereference(ip_nat_sip_hook); - if (ip_nat_sip) { - if (!ip_nat_sip(pskb, ctinfo, ct, &dptr)) { - ret = NF_DROP; - goto out; - } - } - - /* After this point NAT, could have mangled skb, so - we need to recalculate payload lenght. */ - datalen = (*pskb)->len - dataoff; - - if (datalen < (sizeof("SIP/2.0 200") - 1)) - goto out; - - /* RTP info only in some SDP pkts */ - if (memcmp(dptr, "INVITE", sizeof("INVITE") - 1) != 0 && - memcmp(dptr, "SIP/2.0 200", sizeof("SIP/2.0 200") - 1) != 0) { - goto out; - } - /* Get ip and port address from SDP packet. */ - if (ct_sip_get_info(dptr, datalen, &matchoff, &matchlen, - POS_CONNECTION) > 0) { - - /* We'll drop only if there are parse problems. 
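ct_sip_get_info() above does all the header spotting for the SIP helper: it finds the long or short header name, skips to the ln_str token and lets the per-header match_len callback measure the value. A short usage sketch in the spirit of sip_help() further down (the wrapper is illustrative, not from the removed file):

/* Locate the SDP "c=" connection line in a SIP payload and parse the IPv4
 * address it advertises; returns 1 on success, 0 if absent or unparsable. */
static int example_sdp_connection_addr(const char *dptr, size_t datalen,
				       __be32 *addr)
{
	unsigned int matchoff, matchlen;

	if (ct_sip_get_info(dptr, datalen, &matchoff, &matchlen,
			    POS_CONNECTION) <= 0)
		return 0;

	return parse_ipaddr(dptr + matchoff, NULL, addr,
			    dptr + datalen) == 0;
}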
*/ - if (parse_ipaddr(dptr + matchoff, NULL, &ipaddr, - dptr + datalen) < 0) { - ret = NF_DROP; - goto out; - } - if (ct_sip_get_info(dptr, datalen, &matchoff, &matchlen, - POS_MEDIA) > 0) { - - port = simple_strtoul(dptr + matchoff, NULL, 10); - if (port < 1024) { - ret = NF_DROP; - goto out; - } - ret = set_expected_rtp(pskb, ct, ctinfo, - ipaddr, port, dptr); - } - } -out: - return ret; -} - -static struct ip_conntrack_helper sip[MAX_PORTS]; -static char sip_names[MAX_PORTS][10]; - -static void fini(void) -{ - int i; - for (i = 0; i < ports_c; i++) { - DEBUGP("unregistering helper for port %d\n", ports[i]); - ip_conntrack_helper_unregister(&sip[i]); - } -} - -static int __init init(void) -{ - int i, ret; - char *tmpname; - - if (ports_c == 0) - ports[ports_c++] = SIP_PORT; - - for (i = 0; i < ports_c; i++) { - /* Create helper structure */ - memset(&sip[i], 0, sizeof(struct ip_conntrack_helper)); - - sip[i].tuple.dst.protonum = IPPROTO_UDP; - sip[i].tuple.src.u.udp.port = htons(ports[i]); - sip[i].mask.src.u.udp.port = htons(0xFFFF); - sip[i].mask.dst.protonum = 0xFF; - sip[i].max_expected = 2; - sip[i].timeout = 3 * 60; /* 3 minutes */ - sip[i].me = THIS_MODULE; - sip[i].help = sip_help; - - tmpname = &sip_names[i][0]; - if (ports[i] == SIP_PORT) - sprintf(tmpname, "sip"); - else - sprintf(tmpname, "sip-%d", i); - sip[i].name = tmpname; - - DEBUGP("port #%d: %d\n", i, ports[i]); - - ret = ip_conntrack_helper_register(&sip[i]); - if (ret) { - printk("ERROR registering helper for port %d\n", - ports[i]); - fini(); - return ret; - } - } - return 0; -} - -module_init(init); -module_exit(fini); diff -puN net/ipv4/netfilter/ip_conntrack_standalone.c~git-net /dev/null --- a/net/ipv4/netfilter/ip_conntrack_standalone.c +++ /dev/null @@ -1,962 +0,0 @@ -/* This file contains all the functions required for the standalone - ip_conntrack module. - - These are not required by the compatibility layer. -*/ - -/* (C) 1999-2001 Paul `Rusty' Russell - * (C) 2002-2005 Netfilter Core Team - * - * This program is free software; you can redistribute it and/or modify - * it under the terms of the GNU General Public License version 2 as - * published by the Free Software Foundation. - */ - -#include -#include -#include -#include -#include -#include -#include -#include -#include -#ifdef CONFIG_SYSCTL -#include -#endif -#include -#include -#include - -#include -#include -#include -#include - -#if 0 -#define DEBUGP printk -#else -#define DEBUGP(format, args...) 
-#endif - -MODULE_LICENSE("GPL"); - -extern atomic_t ip_conntrack_count; -DECLARE_PER_CPU(struct ip_conntrack_stat, ip_conntrack_stat); - -static int kill_proto(struct ip_conntrack *i, void *data) -{ - return (i->tuplehash[IP_CT_DIR_ORIGINAL].tuple.dst.protonum == - *((u_int8_t *) data)); -} - -#ifdef CONFIG_PROC_FS -static int -print_tuple(struct seq_file *s, const struct ip_conntrack_tuple *tuple, - struct ip_conntrack_protocol *proto) -{ - seq_printf(s, "src=%u.%u.%u.%u dst=%u.%u.%u.%u ", - NIPQUAD(tuple->src.ip), NIPQUAD(tuple->dst.ip)); - return proto->print_tuple(s, tuple); -} - -#ifdef CONFIG_IP_NF_CT_ACCT -static unsigned int -seq_print_counters(struct seq_file *s, - const struct ip_conntrack_counter *counter) -{ - return seq_printf(s, "packets=%llu bytes=%llu ", - (unsigned long long)counter->packets, - (unsigned long long)counter->bytes); -} -#else -#define seq_print_counters(x, y) 0 -#endif - -struct ct_iter_state { - unsigned int bucket; -}; - -static struct list_head *ct_get_first(struct seq_file *seq) -{ - struct ct_iter_state *st = seq->private; - - for (st->bucket = 0; - st->bucket < ip_conntrack_htable_size; - st->bucket++) { - if (!list_empty(&ip_conntrack_hash[st->bucket])) - return ip_conntrack_hash[st->bucket].next; - } - return NULL; -} - -static struct list_head *ct_get_next(struct seq_file *seq, struct list_head *head) -{ - struct ct_iter_state *st = seq->private; - - head = head->next; - while (head == &ip_conntrack_hash[st->bucket]) { - if (++st->bucket >= ip_conntrack_htable_size) - return NULL; - head = ip_conntrack_hash[st->bucket].next; - } - return head; -} - -static struct list_head *ct_get_idx(struct seq_file *seq, loff_t pos) -{ - struct list_head *head = ct_get_first(seq); - - if (head) - while (pos && (head = ct_get_next(seq, head))) - pos--; - return pos ? NULL : head; -} - -static void *ct_seq_start(struct seq_file *seq, loff_t *pos) -{ - read_lock_bh(&ip_conntrack_lock); - return ct_get_idx(seq, *pos); -} - -static void *ct_seq_next(struct seq_file *s, void *v, loff_t *pos) -{ - (*pos)++; - return ct_get_next(s, v); -} - -static void ct_seq_stop(struct seq_file *s, void *v) -{ - read_unlock_bh(&ip_conntrack_lock); -} - -static int ct_seq_show(struct seq_file *s, void *v) -{ - const struct ip_conntrack_tuple_hash *hash = v; - const struct ip_conntrack *conntrack = tuplehash_to_ctrack(hash); - struct ip_conntrack_protocol *proto; - - IP_NF_ASSERT(conntrack); - - /* we only want to print DIR_ORIGINAL */ - if (DIRECTION(hash)) - return 0; - - proto = __ip_conntrack_proto_find(conntrack->tuplehash[IP_CT_DIR_ORIGINAL].tuple.dst.protonum); - IP_NF_ASSERT(proto); - - if (seq_printf(s, "%-8s %u %ld ", - proto->name, - conntrack->tuplehash[IP_CT_DIR_ORIGINAL].tuple.dst.protonum, - timer_pending(&conntrack->timeout) - ? 
(long)(conntrack->timeout.expires - jiffies)/HZ - : 0) != 0) - return -ENOSPC; - - if (proto->print_conntrack(s, conntrack)) - return -ENOSPC; - - if (print_tuple(s, &conntrack->tuplehash[IP_CT_DIR_ORIGINAL].tuple, - proto)) - return -ENOSPC; - - if (seq_print_counters(s, &conntrack->counters[IP_CT_DIR_ORIGINAL])) - return -ENOSPC; - - if (!(test_bit(IPS_SEEN_REPLY_BIT, &conntrack->status))) - if (seq_printf(s, "[UNREPLIED] ")) - return -ENOSPC; - - if (print_tuple(s, &conntrack->tuplehash[IP_CT_DIR_REPLY].tuple, - proto)) - return -ENOSPC; - - if (seq_print_counters(s, &conntrack->counters[IP_CT_DIR_REPLY])) - return -ENOSPC; - - if (test_bit(IPS_ASSURED_BIT, &conntrack->status)) - if (seq_printf(s, "[ASSURED] ")) - return -ENOSPC; - -#if defined(CONFIG_IP_NF_CONNTRACK_MARK) - if (seq_printf(s, "mark=%u ", conntrack->mark)) - return -ENOSPC; -#endif - -#ifdef CONFIG_IP_NF_CONNTRACK_SECMARK - if (seq_printf(s, "secmark=%u ", conntrack->secmark)) - return -ENOSPC; -#endif - - if (seq_printf(s, "use=%u\n", atomic_read(&conntrack->ct_general.use))) - return -ENOSPC; - - return 0; -} - -static struct seq_operations ct_seq_ops = { - .start = ct_seq_start, - .next = ct_seq_next, - .stop = ct_seq_stop, - .show = ct_seq_show -}; - -static int ct_open(struct inode *inode, struct file *file) -{ - struct seq_file *seq; - struct ct_iter_state *st; - int ret; - - st = kmalloc(sizeof(struct ct_iter_state), GFP_KERNEL); - if (st == NULL) - return -ENOMEM; - ret = seq_open(file, &ct_seq_ops); - if (ret) - goto out_free; - seq = file->private_data; - seq->private = st; - memset(st, 0, sizeof(struct ct_iter_state)); - return ret; -out_free: - kfree(st); - return ret; -} - -static const struct file_operations ct_file_ops = { - .owner = THIS_MODULE, - .open = ct_open, - .read = seq_read, - .llseek = seq_lseek, - .release = seq_release_private, -}; - -/* expects */ -static void *exp_seq_start(struct seq_file *s, loff_t *pos) -{ - struct list_head *e = &ip_conntrack_expect_list; - loff_t i; - - /* strange seq_file api calls stop even if we fail, - * thus we need to grab lock since stop unlocks */ - read_lock_bh(&ip_conntrack_lock); - - if (list_empty(e)) - return NULL; - - for (i = 0; i <= *pos; i++) { - e = e->next; - if (e == &ip_conntrack_expect_list) - return NULL; - } - return e; -} - -static void *exp_seq_next(struct seq_file *s, void *v, loff_t *pos) -{ - struct list_head *e = v; - - ++*pos; - e = e->next; - - if (e == &ip_conntrack_expect_list) - return NULL; - - return e; -} - -static void exp_seq_stop(struct seq_file *s, void *v) -{ - read_unlock_bh(&ip_conntrack_lock); -} - -static int exp_seq_show(struct seq_file *s, void *v) -{ - struct ip_conntrack_expect *expect = v; - - if (expect->timeout.function) - seq_printf(s, "%ld ", timer_pending(&expect->timeout) - ? 
(long)(expect->timeout.expires - jiffies)/HZ : 0); - else - seq_printf(s, "- "); - - seq_printf(s, "proto=%u ", expect->tuple.dst.protonum); - - print_tuple(s, &expect->tuple, - __ip_conntrack_proto_find(expect->tuple.dst.protonum)); - return seq_putc(s, '\n'); -} - -static struct seq_operations exp_seq_ops = { - .start = exp_seq_start, - .next = exp_seq_next, - .stop = exp_seq_stop, - .show = exp_seq_show -}; - -static int exp_open(struct inode *inode, struct file *file) -{ - return seq_open(file, &exp_seq_ops); -} - -static const struct file_operations exp_file_ops = { - .owner = THIS_MODULE, - .open = exp_open, - .read = seq_read, - .llseek = seq_lseek, - .release = seq_release -}; - -static void *ct_cpu_seq_start(struct seq_file *seq, loff_t *pos) -{ - int cpu; - - if (*pos == 0) - return SEQ_START_TOKEN; - - for (cpu = *pos-1; cpu < NR_CPUS; ++cpu) { - if (!cpu_possible(cpu)) - continue; - *pos = cpu+1; - return &per_cpu(ip_conntrack_stat, cpu); - } - - return NULL; -} - -static void *ct_cpu_seq_next(struct seq_file *seq, void *v, loff_t *pos) -{ - int cpu; - - for (cpu = *pos; cpu < NR_CPUS; ++cpu) { - if (!cpu_possible(cpu)) - continue; - *pos = cpu+1; - return &per_cpu(ip_conntrack_stat, cpu); - } - - return NULL; -} - -static void ct_cpu_seq_stop(struct seq_file *seq, void *v) -{ -} - -static int ct_cpu_seq_show(struct seq_file *seq, void *v) -{ - unsigned int nr_conntracks = atomic_read(&ip_conntrack_count); - struct ip_conntrack_stat *st = v; - - if (v == SEQ_START_TOKEN) { - seq_printf(seq, "entries searched found new invalid ignore delete delete_list insert insert_failed drop early_drop icmp_error expect_new expect_create expect_delete\n"); - return 0; - } - - seq_printf(seq, "%08x %08x %08x %08x %08x %08x %08x %08x " - "%08x %08x %08x %08x %08x %08x %08x %08x \n", - nr_conntracks, - st->searched, - st->found, - st->new, - st->invalid, - st->ignore, - st->delete, - st->delete_list, - st->insert, - st->insert_failed, - st->drop, - st->early_drop, - st->error, - - st->expect_new, - st->expect_create, - st->expect_delete - ); - return 0; -} - -static struct seq_operations ct_cpu_seq_ops = { - .start = ct_cpu_seq_start, - .next = ct_cpu_seq_next, - .stop = ct_cpu_seq_stop, - .show = ct_cpu_seq_show, -}; - -static int ct_cpu_seq_open(struct inode *inode, struct file *file) -{ - return seq_open(file, &ct_cpu_seq_ops); -} - -static const struct file_operations ct_cpu_seq_fops = { - .owner = THIS_MODULE, - .open = ct_cpu_seq_open, - .read = seq_read, - .llseek = seq_lseek, - .release = seq_release_private, -}; -#endif - -static unsigned int ip_confirm(unsigned int hooknum, - struct sk_buff **pskb, - const struct net_device *in, - const struct net_device *out, - int (*okfn)(struct sk_buff *)) -{ - /* We've seen it coming out the other side: confirm it */ - return ip_conntrack_confirm(pskb); -} - -static unsigned int ip_conntrack_help(unsigned int hooknum, - struct sk_buff **pskb, - const struct net_device *in, - const struct net_device *out, - int (*okfn)(struct sk_buff *)) -{ - struct ip_conntrack *ct; - enum ip_conntrack_info ctinfo; - - /* This is where we call the helper: as the packet goes out. 
*/ - ct = ip_conntrack_get(*pskb, &ctinfo); - if (ct && ct->helper && ctinfo != IP_CT_RELATED + IP_CT_IS_REPLY) { - unsigned int ret; - ret = ct->helper->help(pskb, ct, ctinfo); - if (ret != NF_ACCEPT) - return ret; - } - return NF_ACCEPT; -} - -static unsigned int ip_conntrack_defrag(unsigned int hooknum, - struct sk_buff **pskb, - const struct net_device *in, - const struct net_device *out, - int (*okfn)(struct sk_buff *)) -{ -#if !defined(CONFIG_IP_NF_NAT) && !defined(CONFIG_IP_NF_NAT_MODULE) - /* Previously seen (loopback)? Ignore. Do this before - fragment check. */ - if ((*pskb)->nfct) - return NF_ACCEPT; -#endif - - /* Gather fragments. */ - if ((*pskb)->nh.iph->frag_off & htons(IP_MF|IP_OFFSET)) { - *pskb = ip_ct_gather_frags(*pskb, - hooknum == NF_IP_PRE_ROUTING ? - IP_DEFRAG_CONNTRACK_IN : - IP_DEFRAG_CONNTRACK_OUT); - if (!*pskb) - return NF_STOLEN; - } - return NF_ACCEPT; -} - -static unsigned int ip_conntrack_local(unsigned int hooknum, - struct sk_buff **pskb, - const struct net_device *in, - const struct net_device *out, - int (*okfn)(struct sk_buff *)) -{ - /* root is playing with raw sockets. */ - if ((*pskb)->len < sizeof(struct iphdr) - || (*pskb)->nh.iph->ihl * 4 < sizeof(struct iphdr)) { - if (net_ratelimit()) - printk("ipt_hook: happy cracking.\n"); - return NF_ACCEPT; - } - return ip_conntrack_in(hooknum, pskb, in, out, okfn); -} - -/* Connection tracking may drop packets, but never alters them, so - make it the first hook. */ -static struct nf_hook_ops ip_conntrack_ops[] = { - { - .hook = ip_conntrack_defrag, - .owner = THIS_MODULE, - .pf = PF_INET, - .hooknum = NF_IP_PRE_ROUTING, - .priority = NF_IP_PRI_CONNTRACK_DEFRAG, - }, - { - .hook = ip_conntrack_in, - .owner = THIS_MODULE, - .pf = PF_INET, - .hooknum = NF_IP_PRE_ROUTING, - .priority = NF_IP_PRI_CONNTRACK, - }, - { - .hook = ip_conntrack_defrag, - .owner = THIS_MODULE, - .pf = PF_INET, - .hooknum = NF_IP_LOCAL_OUT, - .priority = NF_IP_PRI_CONNTRACK_DEFRAG, - }, - { - .hook = ip_conntrack_local, - .owner = THIS_MODULE, - .pf = PF_INET, - .hooknum = NF_IP_LOCAL_OUT, - .priority = NF_IP_PRI_CONNTRACK, - }, - { - .hook = ip_conntrack_help, - .owner = THIS_MODULE, - .pf = PF_INET, - .hooknum = NF_IP_POST_ROUTING, - .priority = NF_IP_PRI_CONNTRACK_HELPER, - }, - { - .hook = ip_conntrack_help, - .owner = THIS_MODULE, - .pf = PF_INET, - .hooknum = NF_IP_LOCAL_IN, - .priority = NF_IP_PRI_CONNTRACK_HELPER, - }, - { - .hook = ip_confirm, - .owner = THIS_MODULE, - .pf = PF_INET, - .hooknum = NF_IP_POST_ROUTING, - .priority = NF_IP_PRI_CONNTRACK_CONFIRM, - }, - { - .hook = ip_confirm, - .owner = THIS_MODULE, - .pf = PF_INET, - .hooknum = NF_IP_LOCAL_IN, - .priority = NF_IP_PRI_CONNTRACK_CONFIRM, - }, -}; - -/* Sysctl support */ - -int ip_conntrack_checksum __read_mostly = 1; - -#ifdef CONFIG_SYSCTL - -/* From ip_conntrack_core.c */ -extern int ip_conntrack_max; -extern unsigned int ip_conntrack_htable_size; - -/* From ip_conntrack_proto_tcp.c */ -extern unsigned int ip_ct_tcp_timeout_syn_sent; -extern unsigned int ip_ct_tcp_timeout_syn_recv; -extern unsigned int ip_ct_tcp_timeout_established; -extern unsigned int ip_ct_tcp_timeout_fin_wait; -extern unsigned int ip_ct_tcp_timeout_close_wait; -extern unsigned int ip_ct_tcp_timeout_last_ack; -extern unsigned int ip_ct_tcp_timeout_time_wait; -extern unsigned int ip_ct_tcp_timeout_close; -extern unsigned int ip_ct_tcp_timeout_max_retrans; -extern int ip_ct_tcp_loose; -extern int ip_ct_tcp_be_liberal; -extern int ip_ct_tcp_max_retrans; - -/* From ip_conntrack_proto_udp.c */ 
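/*
 * Editor's aside -- not part of the patch being quoted.  The nf_hook_ops
 * table deleted just above is a compact example of how 2.6-era code attaches
 * packet handlers to the IPv4 netfilter hook points.  The sketch below shows
 * the same registration pattern in isolation; the "demo" names are invented
 * for illustration, and the hook prototype is the struct sk_buff ** form
 * used throughout this tree.
 */
#include <linux/module.h>
#include <linux/netfilter.h>
#include <linux/netfilter_ipv4.h>

static unsigned int demo_hook(unsigned int hooknum,
			      struct sk_buff **pskb,
			      const struct net_device *in,
			      const struct net_device *out,
			      int (*okfn)(struct sk_buff *))
{
	/* Inspect or mangle *pskb here; NF_ACCEPT lets the packet continue. */
	return NF_ACCEPT;
}

static struct nf_hook_ops demo_ops = {
	.hook		= demo_hook,
	.owner		= THIS_MODULE,
	.pf		= PF_INET,
	.hooknum	= NF_IP_PRE_ROUTING,
	.priority	= NF_IP_PRI_FIRST,
};

static int __init demo_init(void)
{
	return nf_register_hook(&demo_ops);
}

static void __exit demo_exit(void)
{
	nf_unregister_hook(&demo_ops);
}

module_init(demo_init);
module_exit(demo_exit);
MODULE_LICENSE("GPL");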
-extern unsigned int ip_ct_udp_timeout; -extern unsigned int ip_ct_udp_timeout_stream; - -/* From ip_conntrack_proto_icmp.c */ -extern unsigned int ip_ct_icmp_timeout; - -/* From ip_conntrack_proto_generic.c */ -extern unsigned int ip_ct_generic_timeout; - -/* Log invalid packets of a given protocol */ -static int log_invalid_proto_min = 0; -static int log_invalid_proto_max = 255; - -static struct ctl_table_header *ip_ct_sysctl_header; - -static ctl_table ip_ct_sysctl_table[] = { - { - .ctl_name = NET_IPV4_NF_CONNTRACK_MAX, - .procname = "ip_conntrack_max", - .data = &ip_conntrack_max, - .maxlen = sizeof(int), - .mode = 0644, - .proc_handler = &proc_dointvec, - }, - { - .ctl_name = NET_IPV4_NF_CONNTRACK_COUNT, - .procname = "ip_conntrack_count", - .data = &ip_conntrack_count, - .maxlen = sizeof(int), - .mode = 0444, - .proc_handler = &proc_dointvec, - }, - { - .ctl_name = NET_IPV4_NF_CONNTRACK_BUCKETS, - .procname = "ip_conntrack_buckets", - .data = &ip_conntrack_htable_size, - .maxlen = sizeof(unsigned int), - .mode = 0444, - .proc_handler = &proc_dointvec, - }, - { - .ctl_name = NET_IPV4_NF_CONNTRACK_CHECKSUM, - .procname = "ip_conntrack_checksum", - .data = &ip_conntrack_checksum, - .maxlen = sizeof(int), - .mode = 0644, - .proc_handler = &proc_dointvec, - }, - { - .ctl_name = NET_IPV4_NF_CONNTRACK_TCP_TIMEOUT_SYN_SENT, - .procname = "ip_conntrack_tcp_timeout_syn_sent", - .data = &ip_ct_tcp_timeout_syn_sent, - .maxlen = sizeof(unsigned int), - .mode = 0644, - .proc_handler = &proc_dointvec_jiffies, - }, - { - .ctl_name = NET_IPV4_NF_CONNTRACK_TCP_TIMEOUT_SYN_RECV, - .procname = "ip_conntrack_tcp_timeout_syn_recv", - .data = &ip_ct_tcp_timeout_syn_recv, - .maxlen = sizeof(unsigned int), - .mode = 0644, - .proc_handler = &proc_dointvec_jiffies, - }, - { - .ctl_name = NET_IPV4_NF_CONNTRACK_TCP_TIMEOUT_ESTABLISHED, - .procname = "ip_conntrack_tcp_timeout_established", - .data = &ip_ct_tcp_timeout_established, - .maxlen = sizeof(unsigned int), - .mode = 0644, - .proc_handler = &proc_dointvec_jiffies, - }, - { - .ctl_name = NET_IPV4_NF_CONNTRACK_TCP_TIMEOUT_FIN_WAIT, - .procname = "ip_conntrack_tcp_timeout_fin_wait", - .data = &ip_ct_tcp_timeout_fin_wait, - .maxlen = sizeof(unsigned int), - .mode = 0644, - .proc_handler = &proc_dointvec_jiffies, - }, - { - .ctl_name = NET_IPV4_NF_CONNTRACK_TCP_TIMEOUT_CLOSE_WAIT, - .procname = "ip_conntrack_tcp_timeout_close_wait", - .data = &ip_ct_tcp_timeout_close_wait, - .maxlen = sizeof(unsigned int), - .mode = 0644, - .proc_handler = &proc_dointvec_jiffies, - }, - { - .ctl_name = NET_IPV4_NF_CONNTRACK_TCP_TIMEOUT_LAST_ACK, - .procname = "ip_conntrack_tcp_timeout_last_ack", - .data = &ip_ct_tcp_timeout_last_ack, - .maxlen = sizeof(unsigned int), - .mode = 0644, - .proc_handler = &proc_dointvec_jiffies, - }, - { - .ctl_name = NET_IPV4_NF_CONNTRACK_TCP_TIMEOUT_TIME_WAIT, - .procname = "ip_conntrack_tcp_timeout_time_wait", - .data = &ip_ct_tcp_timeout_time_wait, - .maxlen = sizeof(unsigned int), - .mode = 0644, - .proc_handler = &proc_dointvec_jiffies, - }, - { - .ctl_name = NET_IPV4_NF_CONNTRACK_TCP_TIMEOUT_CLOSE, - .procname = "ip_conntrack_tcp_timeout_close", - .data = &ip_ct_tcp_timeout_close, - .maxlen = sizeof(unsigned int), - .mode = 0644, - .proc_handler = &proc_dointvec_jiffies, - }, - { - .ctl_name = NET_IPV4_NF_CONNTRACK_UDP_TIMEOUT, - .procname = "ip_conntrack_udp_timeout", - .data = &ip_ct_udp_timeout, - .maxlen = sizeof(unsigned int), - .mode = 0644, - .proc_handler = &proc_dointvec_jiffies, - }, - { - .ctl_name = 
NET_IPV4_NF_CONNTRACK_UDP_TIMEOUT_STREAM, - .procname = "ip_conntrack_udp_timeout_stream", - .data = &ip_ct_udp_timeout_stream, - .maxlen = sizeof(unsigned int), - .mode = 0644, - .proc_handler = &proc_dointvec_jiffies, - }, - { - .ctl_name = NET_IPV4_NF_CONNTRACK_ICMP_TIMEOUT, - .procname = "ip_conntrack_icmp_timeout", - .data = &ip_ct_icmp_timeout, - .maxlen = sizeof(unsigned int), - .mode = 0644, - .proc_handler = &proc_dointvec_jiffies, - }, - { - .ctl_name = NET_IPV4_NF_CONNTRACK_GENERIC_TIMEOUT, - .procname = "ip_conntrack_generic_timeout", - .data = &ip_ct_generic_timeout, - .maxlen = sizeof(unsigned int), - .mode = 0644, - .proc_handler = &proc_dointvec_jiffies, - }, - { - .ctl_name = NET_IPV4_NF_CONNTRACK_LOG_INVALID, - .procname = "ip_conntrack_log_invalid", - .data = &ip_ct_log_invalid, - .maxlen = sizeof(unsigned int), - .mode = 0644, - .proc_handler = &proc_dointvec_minmax, - .strategy = &sysctl_intvec, - .extra1 = &log_invalid_proto_min, - .extra2 = &log_invalid_proto_max, - }, - { - .ctl_name = NET_IPV4_NF_CONNTRACK_TCP_TIMEOUT_MAX_RETRANS, - .procname = "ip_conntrack_tcp_timeout_max_retrans", - .data = &ip_ct_tcp_timeout_max_retrans, - .maxlen = sizeof(unsigned int), - .mode = 0644, - .proc_handler = &proc_dointvec_jiffies, - }, - { - .ctl_name = NET_IPV4_NF_CONNTRACK_TCP_LOOSE, - .procname = "ip_conntrack_tcp_loose", - .data = &ip_ct_tcp_loose, - .maxlen = sizeof(unsigned int), - .mode = 0644, - .proc_handler = &proc_dointvec, - }, - { - .ctl_name = NET_IPV4_NF_CONNTRACK_TCP_BE_LIBERAL, - .procname = "ip_conntrack_tcp_be_liberal", - .data = &ip_ct_tcp_be_liberal, - .maxlen = sizeof(unsigned int), - .mode = 0644, - .proc_handler = &proc_dointvec, - }, - { - .ctl_name = NET_IPV4_NF_CONNTRACK_TCP_MAX_RETRANS, - .procname = "ip_conntrack_tcp_max_retrans", - .data = &ip_ct_tcp_max_retrans, - .maxlen = sizeof(unsigned int), - .mode = 0644, - .proc_handler = &proc_dointvec, - }, - { .ctl_name = 0 } -}; - -#define NET_IP_CONNTRACK_MAX 2089 - -static ctl_table ip_ct_netfilter_table[] = { - { - .ctl_name = NET_IPV4_NETFILTER, - .procname = "netfilter", - .mode = 0555, - .child = ip_ct_sysctl_table, - }, - { - .ctl_name = NET_IP_CONNTRACK_MAX, - .procname = "ip_conntrack_max", - .data = &ip_conntrack_max, - .maxlen = sizeof(int), - .mode = 0644, - .proc_handler = &proc_dointvec - }, - { .ctl_name = 0 } -}; - -static ctl_table ip_ct_ipv4_table[] = { - { - .ctl_name = NET_IPV4, - .procname = "ipv4", - .mode = 0555, - .child = ip_ct_netfilter_table, - }, - { .ctl_name = 0 } -}; - -static ctl_table ip_ct_net_table[] = { - { - .ctl_name = CTL_NET, - .procname = "net", - .mode = 0555, - .child = ip_ct_ipv4_table, - }, - { .ctl_name = 0 } -}; - -EXPORT_SYMBOL(ip_ct_log_invalid); -#endif /* CONFIG_SYSCTL */ - -/* FIXME: Allow NULL functions and sub in pointers to generic for - them. 
--RR */ -int ip_conntrack_protocol_register(struct ip_conntrack_protocol *proto) -{ - int ret = 0; - - write_lock_bh(&ip_conntrack_lock); - if (ip_ct_protos[proto->proto] != &ip_conntrack_generic_protocol) { - ret = -EBUSY; - goto out; - } - rcu_assign_pointer(ip_ct_protos[proto->proto], proto); - out: - write_unlock_bh(&ip_conntrack_lock); - return ret; -} - -void ip_conntrack_protocol_unregister(struct ip_conntrack_protocol *proto) -{ - write_lock_bh(&ip_conntrack_lock); - rcu_assign_pointer(ip_ct_protos[proto->proto], - &ip_conntrack_generic_protocol); - write_unlock_bh(&ip_conntrack_lock); - synchronize_rcu(); - - /* Remove all contrack entries for this protocol */ - ip_ct_iterate_cleanup(kill_proto, &proto->proto); -} - -static int __init ip_conntrack_standalone_init(void) -{ -#ifdef CONFIG_PROC_FS - struct proc_dir_entry *proc, *proc_exp, *proc_stat; -#endif - int ret = 0; - - ret = ip_conntrack_init(); - if (ret < 0) - return ret; - -#ifdef CONFIG_PROC_FS - ret = -ENOMEM; - proc = proc_net_fops_create("ip_conntrack", 0440, &ct_file_ops); - if (!proc) goto cleanup_init; - - proc_exp = proc_net_fops_create("ip_conntrack_expect", 0440, - &exp_file_ops); - if (!proc_exp) goto cleanup_proc; - - proc_stat = create_proc_entry("ip_conntrack", S_IRUGO, proc_net_stat); - if (!proc_stat) - goto cleanup_proc_exp; - - proc_stat->proc_fops = &ct_cpu_seq_fops; - proc_stat->owner = THIS_MODULE; -#endif - - ret = nf_register_hooks(ip_conntrack_ops, ARRAY_SIZE(ip_conntrack_ops)); - if (ret < 0) { - printk("ip_conntrack: can't register hooks.\n"); - goto cleanup_proc_stat; - } -#ifdef CONFIG_SYSCTL - ip_ct_sysctl_header = register_sysctl_table(ip_ct_net_table); - if (ip_ct_sysctl_header == NULL) { - printk("ip_conntrack: can't register to sysctl.\n"); - ret = -ENOMEM; - goto cleanup_hooks; - } -#endif - return ret; - -#ifdef CONFIG_SYSCTL - cleanup_hooks: - nf_unregister_hooks(ip_conntrack_ops, ARRAY_SIZE(ip_conntrack_ops)); -#endif - cleanup_proc_stat: -#ifdef CONFIG_PROC_FS - remove_proc_entry("ip_conntrack", proc_net_stat); - cleanup_proc_exp: - proc_net_remove("ip_conntrack_expect"); - cleanup_proc: - proc_net_remove("ip_conntrack"); - cleanup_init: -#endif /* CONFIG_PROC_FS */ - ip_conntrack_cleanup(); - return ret; -} - -static void __exit ip_conntrack_standalone_fini(void) -{ - synchronize_net(); -#ifdef CONFIG_SYSCTL - unregister_sysctl_table(ip_ct_sysctl_header); -#endif - nf_unregister_hooks(ip_conntrack_ops, ARRAY_SIZE(ip_conntrack_ops)); -#ifdef CONFIG_PROC_FS - remove_proc_entry("ip_conntrack", proc_net_stat); - proc_net_remove("ip_conntrack_expect"); - proc_net_remove("ip_conntrack"); -#endif /* CONFIG_PROC_FS */ - ip_conntrack_cleanup(); -} - -module_init(ip_conntrack_standalone_init); -module_exit(ip_conntrack_standalone_fini); - -/* Some modules need us, but don't depend directly on any symbol. - They should call this. 
*/ -void need_conntrack(void) -{ -} - -#ifdef CONFIG_IP_NF_CONNTRACK_EVENTS -EXPORT_SYMBOL_GPL(ip_conntrack_chain); -EXPORT_SYMBOL_GPL(ip_conntrack_expect_chain); -EXPORT_SYMBOL_GPL(ip_conntrack_register_notifier); -EXPORT_SYMBOL_GPL(ip_conntrack_unregister_notifier); -EXPORT_SYMBOL_GPL(__ip_ct_event_cache_init); -EXPORT_PER_CPU_SYMBOL_GPL(ip_conntrack_ecache); -#endif -EXPORT_SYMBOL(ip_conntrack_protocol_register); -EXPORT_SYMBOL(ip_conntrack_protocol_unregister); -EXPORT_SYMBOL(ip_ct_get_tuple); -EXPORT_SYMBOL(invert_tuplepr); -EXPORT_SYMBOL(ip_conntrack_alter_reply); -EXPORT_SYMBOL(ip_conntrack_destroyed); -EXPORT_SYMBOL(need_conntrack); -EXPORT_SYMBOL(ip_conntrack_helper_register); -EXPORT_SYMBOL(ip_conntrack_helper_unregister); -EXPORT_SYMBOL(ip_ct_iterate_cleanup); -EXPORT_SYMBOL(__ip_ct_refresh_acct); - -EXPORT_SYMBOL(ip_conntrack_expect_alloc); -EXPORT_SYMBOL(ip_conntrack_expect_put); -EXPORT_SYMBOL_GPL(__ip_conntrack_expect_find); -EXPORT_SYMBOL_GPL(ip_conntrack_expect_find_get); -EXPORT_SYMBOL(ip_conntrack_expect_related); -EXPORT_SYMBOL(ip_conntrack_unexpect_related); -EXPORT_SYMBOL_GPL(ip_conntrack_expect_list); -EXPORT_SYMBOL_GPL(ip_ct_unlink_expect); - -EXPORT_SYMBOL(ip_conntrack_tuple_taken); -EXPORT_SYMBOL(ip_ct_gather_frags); -EXPORT_SYMBOL(ip_conntrack_htable_size); -EXPORT_SYMBOL(ip_conntrack_lock); -EXPORT_SYMBOL(ip_conntrack_hash); -EXPORT_SYMBOL(ip_conntrack_untracked); -EXPORT_SYMBOL_GPL(ip_conntrack_find_get); -#ifdef CONFIG_IP_NF_NAT_NEEDED -EXPORT_SYMBOL(ip_conntrack_tcp_update); -#endif - -EXPORT_SYMBOL_GPL(ip_conntrack_flush); -EXPORT_SYMBOL_GPL(__ip_conntrack_find); - -EXPORT_SYMBOL_GPL(ip_conntrack_alloc); -EXPORT_SYMBOL_GPL(ip_conntrack_free); -EXPORT_SYMBOL_GPL(ip_conntrack_hash_insert); - -EXPORT_SYMBOL_GPL(ip_ct_remove_expectations); - -EXPORT_SYMBOL_GPL(ip_conntrack_helper_find_get); -EXPORT_SYMBOL_GPL(ip_conntrack_helper_put); -EXPORT_SYMBOL_GPL(__ip_conntrack_helper_find_byname); - -EXPORT_SYMBOL_GPL(ip_conntrack_proto_find_get); -EXPORT_SYMBOL_GPL(ip_conntrack_proto_put); -EXPORT_SYMBOL_GPL(__ip_conntrack_proto_find); -EXPORT_SYMBOL_GPL(ip_conntrack_checksum); -#if defined(CONFIG_IP_NF_CONNTRACK_NETLINK) || \ - defined(CONFIG_IP_NF_CONNTRACK_NETLINK_MODULE) -EXPORT_SYMBOL_GPL(ip_ct_port_tuple_to_nfattr); -EXPORT_SYMBOL_GPL(ip_ct_port_nfattr_to_tuple); -#endif diff -puN net/ipv4/netfilter/ip_conntrack_tftp.c~git-net /dev/null --- a/net/ipv4/netfilter/ip_conntrack_tftp.c +++ /dev/null @@ -1,161 +0,0 @@ -/* (C) 2001-2002 Magnus Boden - * - * This program is free software; you can redistribute it and/or modify - * it under the terms of the GNU General Public License version 2 as - * published by the Free Software Foundation. - * - * Version: 0.0.7 - * - * Thu 21 Mar 2002 Harald Welte - * - port to newnat API - * - */ - -#include -#include -#include - -#include -#include -#include -#include -#include - -MODULE_AUTHOR("Magnus Boden "); -MODULE_DESCRIPTION("tftp connection tracking helper"); -MODULE_LICENSE("GPL"); - -#define MAX_PORTS 8 -static unsigned short ports[MAX_PORTS]; -static int ports_c; -module_param_array(ports, ushort, &ports_c, 0400); -MODULE_PARM_DESC(ports, "port numbers of tftp servers"); - -#if 0 -#define DEBUGP(format, args...) printk("%s:%s:" format, \ - __FILE__, __FUNCTION__ , ## args) -#else -#define DEBUGP(format, args...) 
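/*
 * Editor's aside -- not part of the patch.  ip_conntrack_standalone_init()
 * above unwinds partial registrations with the usual kernel goto ladder:
 * each failure label undoes exactly the steps that already succeeded, in
 * reverse order.  A minimal sketch of the idiom follows; step_one() through
 * step_three() and their undo counterparts are hypothetical stand-ins for
 * the proc, hook and sysctl registrations, stubbed so the sketch is
 * self-contained.
 */
#include <linux/init.h>

static int step_one(void)   { return 0; }	/* hypothetical */
static int step_two(void)   { return 0; }	/* hypothetical */
static int step_three(void) { return 0; }	/* hypothetical */
static void undo_step_one(void) { }
static void undo_step_two(void) { }

static int __init example_init(void)
{
	int ret;

	ret = step_one();
	if (ret < 0)
		return ret;		/* nothing to undo yet */

	ret = step_two();
	if (ret < 0)
		goto undo_one;

	ret = step_three();
	if (ret < 0)
		goto undo_two;

	return 0;

undo_two:
	undo_step_two();
undo_one:
	undo_step_one();
	return ret;
}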
-#endif - -unsigned int (*ip_nat_tftp_hook)(struct sk_buff **pskb, - enum ip_conntrack_info ctinfo, - struct ip_conntrack_expect *exp); -EXPORT_SYMBOL_GPL(ip_nat_tftp_hook); - -static int tftp_help(struct sk_buff **pskb, - struct ip_conntrack *ct, - enum ip_conntrack_info ctinfo) -{ - struct tftphdr _tftph, *tfh; - struct ip_conntrack_expect *exp; - unsigned int ret = NF_ACCEPT; - typeof(ip_nat_tftp_hook) ip_nat_tftp; - - tfh = skb_header_pointer(*pskb, - (*pskb)->nh.iph->ihl*4+sizeof(struct udphdr), - sizeof(_tftph), &_tftph); - if (tfh == NULL) - return NF_ACCEPT; - - switch (ntohs(tfh->opcode)) { - /* RRQ and WRQ works the same way */ - case TFTP_OPCODE_READ: - case TFTP_OPCODE_WRITE: - DEBUGP(""); - DUMP_TUPLE(&ct->tuplehash[IP_CT_DIR_ORIGINAL].tuple); - DUMP_TUPLE(&ct->tuplehash[IP_CT_DIR_REPLY].tuple); - - exp = ip_conntrack_expect_alloc(ct); - if (exp == NULL) - return NF_DROP; - - exp->tuple = ct->tuplehash[IP_CT_DIR_REPLY].tuple; - exp->mask.src.ip = htonl(0xffffffff); - exp->mask.src.u.udp.port = 0; - exp->mask.dst.ip = htonl(0xffffffff); - exp->mask.dst.u.udp.port = htons(0xffff); - exp->mask.dst.protonum = 0xff; - exp->expectfn = NULL; - exp->flags = 0; - - DEBUGP("expect: "); - DUMP_TUPLE(&exp->tuple); - DUMP_TUPLE(&exp->mask); - ip_nat_tftp = rcu_dereference(ip_nat_tftp_hook); - if (ip_nat_tftp) - ret = ip_nat_tftp(pskb, ctinfo, exp); - else if (ip_conntrack_expect_related(exp) != 0) - ret = NF_DROP; - ip_conntrack_expect_put(exp); - break; - case TFTP_OPCODE_DATA: - case TFTP_OPCODE_ACK: - DEBUGP("Data/ACK opcode\n"); - break; - case TFTP_OPCODE_ERROR: - DEBUGP("Error opcode\n"); - break; - default: - DEBUGP("Unknown opcode\n"); - } - return NF_ACCEPT; -} - -static struct ip_conntrack_helper tftp[MAX_PORTS]; -static char tftp_names[MAX_PORTS][sizeof("tftp-65535")]; - -static void ip_conntrack_tftp_fini(void) -{ - int i; - - for (i = 0 ; i < ports_c; i++) { - DEBUGP("unregistering helper for port %d\n", - ports[i]); - ip_conntrack_helper_unregister(&tftp[i]); - } -} - -static int __init ip_conntrack_tftp_init(void) -{ - int i, ret; - char *tmpname; - - if (ports_c == 0) - ports[ports_c++] = TFTP_PORT; - - for (i = 0; i < ports_c; i++) { - /* Create helper structure */ - memset(&tftp[i], 0, sizeof(struct ip_conntrack_helper)); - - tftp[i].tuple.dst.protonum = IPPROTO_UDP; - tftp[i].tuple.src.u.udp.port = htons(ports[i]); - tftp[i].mask.dst.protonum = 0xFF; - tftp[i].mask.src.u.udp.port = htons(0xFFFF); - tftp[i].max_expected = 1; - tftp[i].timeout = 5 * 60; /* 5 minutes */ - tftp[i].me = THIS_MODULE; - tftp[i].help = tftp_help; - - tmpname = &tftp_names[i][0]; - if (ports[i] == TFTP_PORT) - sprintf(tmpname, "tftp"); - else - sprintf(tmpname, "tftp-%d", i); - tftp[i].name = tmpname; - - DEBUGP("port #%d: %d\n", i, ports[i]); - - ret=ip_conntrack_helper_register(&tftp[i]); - if (ret) { - printk("ERROR registering helper for port %d\n", - ports[i]); - ip_conntrack_tftp_fini(); - return(ret); - } - } - return(0); -} - -module_init(ip_conntrack_tftp_init); -module_exit(ip_conntrack_tftp_fini); diff -puN net/ipv4/netfilter/ip_nat_amanda.c~git-net /dev/null --- a/net/ipv4/netfilter/ip_nat_amanda.c +++ /dev/null @@ -1,85 +0,0 @@ -/* Amanda extension for TCP NAT alteration. - * (C) 2002 by Brian J. 
Murrell - * based on a copy of HW's ip_nat_irc.c as well as other modules - * - * This program is free software; you can redistribute it and/or - * modify it under the terms of the GNU General Public License - * as published by the Free Software Foundation; either version - * 2 of the License, or (at your option) any later version. - * - * Module load syntax: - * insmod ip_nat_amanda.o - */ - -#include -#include -#include -#include -#include -#include -#include -#include - -#include -#include -#include -#include -#include - - -MODULE_AUTHOR("Brian J. Murrell "); -MODULE_DESCRIPTION("Amanda NAT helper"); -MODULE_LICENSE("GPL"); - -static unsigned int help(struct sk_buff **pskb, - enum ip_conntrack_info ctinfo, - unsigned int matchoff, - unsigned int matchlen, - struct ip_conntrack_expect *exp) -{ - char buffer[sizeof("65535")]; - u_int16_t port; - unsigned int ret; - - /* Connection comes from client. */ - exp->saved_proto.tcp.port = exp->tuple.dst.u.tcp.port; - exp->dir = IP_CT_DIR_ORIGINAL; - - /* When you see the packet, we need to NAT it the same as the - * this one (ie. same IP: it will be TCP and master is UDP). */ - exp->expectfn = ip_nat_follow_master; - - /* Try to get same port: if not, try to change it. */ - for (port = ntohs(exp->saved_proto.tcp.port); port != 0; port++) { - exp->tuple.dst.u.tcp.port = htons(port); - if (ip_conntrack_expect_related(exp) == 0) - break; - } - - if (port == 0) - return NF_DROP; - - sprintf(buffer, "%u", port); - ret = ip_nat_mangle_udp_packet(pskb, exp->master, ctinfo, - matchoff, matchlen, - buffer, strlen(buffer)); - if (ret != NF_ACCEPT) - ip_conntrack_unexpect_related(exp); - return ret; -} - -static void __exit ip_nat_amanda_fini(void) -{ - rcu_assign_pointer(ip_nat_amanda_hook, NULL); - synchronize_rcu(); -} - -static int __init ip_nat_amanda_init(void) -{ - BUG_ON(rcu_dereference(ip_nat_amanda_hook)); - rcu_assign_pointer(ip_nat_amanda_hook, help); - return 0; -} - -module_init(ip_nat_amanda_init); -module_exit(ip_nat_amanda_fini); diff -puN net/ipv4/netfilter/ip_nat_core.c~git-net /dev/null --- a/net/ipv4/netfilter/ip_nat_core.c +++ /dev/null @@ -1,634 +0,0 @@ -/* NAT for netfilter; shared with compatibility layer. */ - -/* (C) 1999-2001 Paul `Rusty' Russell - * (C) 2002-2004 Netfilter Core Team - * - * This program is free software; you can redistribute it and/or modify - * it under the terms of the GNU General Public License version 2 as - * published by the Free Software Foundation. - */ - -#include -#include -#include -#include -#include -#include -#include -#include -#include -#include /* For tcp_prot in getorigdst */ -#include -#include -#include - -#include -#include -#include -#include -#include -#include -#include -#include - -#if 0 -#define DEBUGP printk -#else -#define DEBUGP(format, args...) 
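/*
 * Editor's aside -- not part of the patch.  ip_nat_amanda above shows the
 * pattern this tree uses to let an optional NAT module plug into a conntrack
 * helper: a global function pointer published with rcu_assign_pointer(),
 * read under rcu_dereference(), and torn down with synchronize_rcu() so no
 * reader can still be running the old function once the module unloads.
 * A stripped-down sketch of that pattern; demo_hook_fn and demo_hook are
 * illustrative names, not symbols from the tree.
 */
#include <linux/rcupdate.h>

typedef int (*demo_hook_fn)(void *arg);
static demo_hook_fn demo_hook;		/* NULL while no module is loaded */

static int call_demo_hook(void *arg)
{
	demo_hook_fn fn;
	int ret = 0;

	rcu_read_lock();
	fn = rcu_dereference(demo_hook);
	if (fn)
		ret = fn(arg);
	rcu_read_unlock();
	return ret;
}

static void register_demo_hook(demo_hook_fn fn)
{
	rcu_assign_pointer(demo_hook, fn);
}

static void unregister_demo_hook(void)
{
	rcu_assign_pointer(demo_hook, NULL);
	synchronize_rcu();	/* wait for in-flight callers to finish */
}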
-#endif - -DEFINE_RWLOCK(ip_nat_lock); - -/* Calculated at init based on memory size */ -static unsigned int ip_nat_htable_size; - -static struct list_head *bysource; - -#define MAX_IP_NAT_PROTO 256 -static struct ip_nat_protocol *ip_nat_protos[MAX_IP_NAT_PROTO]; - -static inline struct ip_nat_protocol * -__ip_nat_proto_find(u_int8_t protonum) -{ - return rcu_dereference(ip_nat_protos[protonum]); -} - -struct ip_nat_protocol * -ip_nat_proto_find_get(u_int8_t protonum) -{ - struct ip_nat_protocol *p; - - rcu_read_lock(); - p = __ip_nat_proto_find(protonum); - if (!try_module_get(p->me)) - p = &ip_nat_unknown_protocol; - rcu_read_unlock(); - - return p; -} -EXPORT_SYMBOL_GPL(ip_nat_proto_find_get); - -void -ip_nat_proto_put(struct ip_nat_protocol *p) -{ - module_put(p->me); -} -EXPORT_SYMBOL_GPL(ip_nat_proto_put); - -/* We keep an extra hash for each conntrack, for fast searching. */ -static inline unsigned int -hash_by_src(const struct ip_conntrack_tuple *tuple) -{ - /* Original src, to ensure we map it consistently if poss. */ - return jhash_3words((__force u32)tuple->src.ip, tuple->src.u.all, - tuple->dst.protonum, 0) % ip_nat_htable_size; -} - -/* Noone using conntrack by the time this called. */ -static void ip_nat_cleanup_conntrack(struct ip_conntrack *conn) -{ - if (!(conn->status & IPS_NAT_DONE_MASK)) - return; - - write_lock_bh(&ip_nat_lock); - list_del(&conn->nat.info.bysource); - write_unlock_bh(&ip_nat_lock); -} - -/* Is this tuple already taken? (not by us) */ -int -ip_nat_used_tuple(const struct ip_conntrack_tuple *tuple, - const struct ip_conntrack *ignored_conntrack) -{ - /* Conntrack tracking doesn't keep track of outgoing tuples; only - incoming ones. NAT means they don't have a fixed mapping, - so we invert the tuple and look for the incoming reply. - - We could keep a separate hash if this proves too slow. */ - struct ip_conntrack_tuple reply; - - invert_tuplepr(&reply, tuple); - return ip_conntrack_tuple_taken(&reply, ignored_conntrack); -} -EXPORT_SYMBOL(ip_nat_used_tuple); - -/* If we source map this tuple so reply looks like reply_tuple, will - * that meet the constraints of range. */ -static int -in_range(const struct ip_conntrack_tuple *tuple, - const struct ip_nat_range *range) -{ - struct ip_nat_protocol *proto; - int ret = 0; - - /* If we are supposed to map IPs, then we must be in the - range specified, otherwise let this drag us onto a new src IP. 
*/ - if (range->flags & IP_NAT_RANGE_MAP_IPS) { - if (ntohl(tuple->src.ip) < ntohl(range->min_ip) - || ntohl(tuple->src.ip) > ntohl(range->max_ip)) - return 0; - } - - rcu_read_lock(); - proto = __ip_nat_proto_find(tuple->dst.protonum); - if (!(range->flags & IP_NAT_RANGE_PROTO_SPECIFIED) - || proto->in_range(tuple, IP_NAT_MANIP_SRC, - &range->min, &range->max)) - ret = 1; - rcu_read_unlock(); - - return ret; -} - -static inline int -same_src(const struct ip_conntrack *ct, - const struct ip_conntrack_tuple *tuple) -{ - return (ct->tuplehash[IP_CT_DIR_ORIGINAL].tuple.dst.protonum - == tuple->dst.protonum - && ct->tuplehash[IP_CT_DIR_ORIGINAL].tuple.src.ip - == tuple->src.ip - && ct->tuplehash[IP_CT_DIR_ORIGINAL].tuple.src.u.all - == tuple->src.u.all); -} - -/* Only called for SRC manip */ -static int -find_appropriate_src(const struct ip_conntrack_tuple *tuple, - struct ip_conntrack_tuple *result, - const struct ip_nat_range *range) -{ - unsigned int h = hash_by_src(tuple); - struct ip_conntrack *ct; - - read_lock_bh(&ip_nat_lock); - list_for_each_entry(ct, &bysource[h], nat.info.bysource) { - if (same_src(ct, tuple)) { - /* Copy source part from reply tuple. */ - invert_tuplepr(result, - &ct->tuplehash[IP_CT_DIR_REPLY].tuple); - result->dst = tuple->dst; - - if (in_range(result, range)) { - read_unlock_bh(&ip_nat_lock); - return 1; - } - } - } - read_unlock_bh(&ip_nat_lock); - return 0; -} - -/* For [FUTURE] fragmentation handling, we want the least-used - src-ip/dst-ip/proto triple. Fairness doesn't come into it. Thus - if the range specifies 1.2.3.4 ports 10000-10005 and 1.2.3.5 ports - 1-65535, we don't do pro-rata allocation based on ports; we choose - the ip with the lowest src-ip/dst-ip/proto usage. -*/ -static void -find_best_ips_proto(struct ip_conntrack_tuple *tuple, - const struct ip_nat_range *range, - const struct ip_conntrack *conntrack, - enum ip_nat_manip_type maniptype) -{ - __be32 *var_ipp; - /* Host order */ - u_int32_t minip, maxip, j; - - /* No IP mapping? Do nothing. */ - if (!(range->flags & IP_NAT_RANGE_MAP_IPS)) - return; - - if (maniptype == IP_NAT_MANIP_SRC) - var_ipp = &tuple->src.ip; - else - var_ipp = &tuple->dst.ip; - - /* Fast path: only one choice. */ - if (range->min_ip == range->max_ip) { - *var_ipp = range->min_ip; - return; - } - - /* Hashing source and destination IPs gives a fairly even - * spread in practice (if there are a small number of IPs - * involved, there usually aren't that many connections - * anyway). The consistency means that servers see the same - * client coming from the same IP (some Internet Banking sites - * like this), even across reboots. */ - minip = ntohl(range->min_ip); - maxip = ntohl(range->max_ip); - j = jhash_2words((__force u32)tuple->src.ip, (__force u32)tuple->dst.ip, 0); - *var_ipp = htonl(minip + j % (maxip - minip + 1)); -} - -/* Manipulate the tuple into the range given. For NF_IP_POST_ROUTING, - * we change the source to map into the range. For NF_IP_PRE_ROUTING - * and NF_IP_LOCAL_OUT, we change the destination to map into the - * range. It might not be possible to get a unique tuple, but we try. - * At worst (or if we race), we will end up with a final duplicate in - * __ip_conntrack_confirm and drop the packet. 
*/ -static void -get_unique_tuple(struct ip_conntrack_tuple *tuple, - const struct ip_conntrack_tuple *orig_tuple, - const struct ip_nat_range *range, - struct ip_conntrack *conntrack, - enum ip_nat_manip_type maniptype) -{ - struct ip_nat_protocol *proto; - - /* 1) If this srcip/proto/src-proto-part is currently mapped, - and that same mapping gives a unique tuple within the given - range, use that. - - This is only required for source (ie. NAT/masq) mappings. - So far, we don't do local source mappings, so multiple - manips not an issue. */ - if (maniptype == IP_NAT_MANIP_SRC) { - if (find_appropriate_src(orig_tuple, tuple, range)) { - DEBUGP("get_unique_tuple: Found current src map\n"); - if (!(range->flags & IP_NAT_RANGE_PROTO_RANDOM)) - if (!ip_nat_used_tuple(tuple, conntrack)) - return; - } - } - - /* 2) Select the least-used IP/proto combination in the given - range. */ - *tuple = *orig_tuple; - find_best_ips_proto(tuple, range, conntrack, maniptype); - - /* 3) The per-protocol part of the manip is made to map into - the range to make a unique tuple. */ - - rcu_read_lock(); - proto = __ip_nat_proto_find(orig_tuple->dst.protonum); - - /* Change protocol info to have some randomization */ - if (range->flags & IP_NAT_RANGE_PROTO_RANDOM) { - proto->unique_tuple(tuple, range, maniptype, conntrack); - goto out; - } - - /* Only bother mapping if it's not already in range and unique */ - if ((!(range->flags & IP_NAT_RANGE_PROTO_SPECIFIED) - || proto->in_range(tuple, maniptype, &range->min, &range->max)) - && !ip_nat_used_tuple(tuple, conntrack)) - goto out; - - /* Last change: get protocol to try to obtain unique tuple. */ - proto->unique_tuple(tuple, range, maniptype, conntrack); -out: - rcu_read_unlock(); -} - -unsigned int -ip_nat_setup_info(struct ip_conntrack *conntrack, - const struct ip_nat_range *range, - unsigned int hooknum) -{ - struct ip_conntrack_tuple curr_tuple, new_tuple; - struct ip_nat_info *info = &conntrack->nat.info; - int have_to_hash = !(conntrack->status & IPS_NAT_DONE_MASK); - enum ip_nat_manip_type maniptype = HOOK2MANIP(hooknum); - - IP_NF_ASSERT(hooknum == NF_IP_PRE_ROUTING - || hooknum == NF_IP_POST_ROUTING - || hooknum == NF_IP_LOCAL_IN - || hooknum == NF_IP_LOCAL_OUT); - BUG_ON(ip_nat_initialized(conntrack, maniptype)); - - /* What we've got will look like inverse of reply. Normally - this is what is in the conntrack, except for prior - manipulations (future optimization: if num_manips == 0, - orig_tp = - conntrack->tuplehash[IP_CT_DIR_ORIGINAL].tuple) */ - invert_tuplepr(&curr_tuple, - &conntrack->tuplehash[IP_CT_DIR_REPLY].tuple); - - get_unique_tuple(&new_tuple, &curr_tuple, range, conntrack, maniptype); - - if (!ip_ct_tuple_equal(&new_tuple, &curr_tuple)) { - struct ip_conntrack_tuple reply; - - /* Alter conntrack table so will recognize replies. */ - invert_tuplepr(&reply, &new_tuple); - ip_conntrack_alter_reply(conntrack, &reply); - - /* Non-atomic: we own this at the moment. */ - if (maniptype == IP_NAT_MANIP_SRC) - conntrack->status |= IPS_SRC_NAT; - else - conntrack->status |= IPS_DST_NAT; - } - - /* Place in source hash if this is the first time. */ - if (have_to_hash) { - unsigned int srchash - = hash_by_src(&conntrack->tuplehash[IP_CT_DIR_ORIGINAL] - .tuple); - write_lock_bh(&ip_nat_lock); - list_add(&info->bysource, &bysource[srchash]); - write_unlock_bh(&ip_nat_lock); - } - - /* It's done. 
*/ - if (maniptype == IP_NAT_MANIP_DST) - set_bit(IPS_DST_NAT_DONE_BIT, &conntrack->status); - else - set_bit(IPS_SRC_NAT_DONE_BIT, &conntrack->status); - - return NF_ACCEPT; -} -EXPORT_SYMBOL(ip_nat_setup_info); - -/* Returns true if succeeded. */ -static int -manip_pkt(u_int16_t proto, - struct sk_buff **pskb, - unsigned int iphdroff, - const struct ip_conntrack_tuple *target, - enum ip_nat_manip_type maniptype) -{ - struct iphdr *iph; - struct ip_nat_protocol *p; - - if (!skb_make_writable(pskb, iphdroff + sizeof(*iph))) - return 0; - - iph = (void *)(*pskb)->data + iphdroff; - - /* Manipulate protcol part. */ - - /* rcu_read_lock()ed by nf_hook_slow */ - p = __ip_nat_proto_find(proto); - if (!p->manip_pkt(pskb, iphdroff, target, maniptype)) - return 0; - - iph = (void *)(*pskb)->data + iphdroff; - - if (maniptype == IP_NAT_MANIP_SRC) { - nf_csum_replace4(&iph->check, iph->saddr, target->src.ip); - iph->saddr = target->src.ip; - } else { - nf_csum_replace4(&iph->check, iph->daddr, target->dst.ip); - iph->daddr = target->dst.ip; - } - return 1; -} - -/* Do packet manipulations according to ip_nat_setup_info. */ -unsigned int ip_nat_packet(struct ip_conntrack *ct, - enum ip_conntrack_info ctinfo, - unsigned int hooknum, - struct sk_buff **pskb) -{ - enum ip_conntrack_dir dir = CTINFO2DIR(ctinfo); - unsigned long statusbit; - enum ip_nat_manip_type mtype = HOOK2MANIP(hooknum); - - if (mtype == IP_NAT_MANIP_SRC) - statusbit = IPS_SRC_NAT; - else - statusbit = IPS_DST_NAT; - - /* Invert if this is reply dir. */ - if (dir == IP_CT_DIR_REPLY) - statusbit ^= IPS_NAT_MASK; - - /* Non-atomic: these bits don't change. */ - if (ct->status & statusbit) { - struct ip_conntrack_tuple target; - - /* We are aiming to look like inverse of other direction. */ - invert_tuplepr(&target, &ct->tuplehash[!dir].tuple); - - if (!manip_pkt(target.dst.protonum, pskb, 0, &target, mtype)) - return NF_DROP; - } - return NF_ACCEPT; -} -EXPORT_SYMBOL_GPL(ip_nat_packet); - -/* Dir is direction ICMP is coming from (opposite to packet it contains) */ -int ip_nat_icmp_reply_translation(struct ip_conntrack *ct, - enum ip_conntrack_info ctinfo, - unsigned int hooknum, - struct sk_buff **pskb) -{ - struct { - struct icmphdr icmp; - struct iphdr ip; - } *inside; - struct ip_conntrack_protocol *proto; - struct ip_conntrack_tuple inner, target; - int hdrlen = (*pskb)->nh.iph->ihl * 4; - enum ip_conntrack_dir dir = CTINFO2DIR(ctinfo); - unsigned long statusbit; - enum ip_nat_manip_type manip = HOOK2MANIP(hooknum); - - if (!skb_make_writable(pskb, hdrlen + sizeof(*inside))) - return 0; - - inside = (void *)(*pskb)->data + (*pskb)->nh.iph->ihl*4; - - /* We're actually going to mangle it beyond trivial checksum - adjustment, so make sure the current checksum is correct. */ - if (nf_ip_checksum(*pskb, hooknum, hdrlen, 0)) - return 0; - - /* Must be RELATED */ - IP_NF_ASSERT((*pskb)->nfctinfo == IP_CT_RELATED || - (*pskb)->nfctinfo == IP_CT_RELATED+IP_CT_IS_REPLY); - - /* Redirects on non-null nats must be dropped, else they'll - start talking to each other without our translation, and be - confused... --RR */ - if (inside->icmp.type == ICMP_REDIRECT) { - /* If NAT isn't finished, assume it and drop. */ - if ((ct->status & IPS_NAT_DONE_MASK) != IPS_NAT_DONE_MASK) - return 0; - - if (ct->status & IPS_NAT_MASK) - return 0; - } - - DEBUGP("icmp_reply_translation: translating error %p manp %u dir %s\n", - *pskb, manip, dir == IP_CT_DIR_ORIGINAL ? 
"ORIG" : "REPLY"); - - /* rcu_read_lock()ed by nf_hook_slow */ - proto = __ip_conntrack_proto_find(inside->ip.protocol); - if (!ip_ct_get_tuple(&inside->ip, *pskb, (*pskb)->nh.iph->ihl*4 + - sizeof(struct icmphdr) + inside->ip.ihl*4, - &inner, proto)) - return 0; - - /* Change inner back to look like incoming packet. We do the - opposite manip on this hook to normal, because it might not - pass all hooks (locally-generated ICMP). Consider incoming - packet: PREROUTING (DST manip), routing produces ICMP, goes - through POSTROUTING (which must correct the DST manip). */ - if (!manip_pkt(inside->ip.protocol, pskb, - (*pskb)->nh.iph->ihl*4 - + sizeof(inside->icmp), - &ct->tuplehash[!dir].tuple, - !manip)) - return 0; - - if ((*pskb)->ip_summed != CHECKSUM_PARTIAL) { - /* Reloading "inside" here since manip_pkt inner. */ - inside = (void *)(*pskb)->data + (*pskb)->nh.iph->ihl*4; - inside->icmp.checksum = 0; - inside->icmp.checksum = csum_fold(skb_checksum(*pskb, hdrlen, - (*pskb)->len - hdrlen, - 0)); - } - - /* Change outer to look the reply to an incoming packet - * (proto 0 means don't invert per-proto part). */ - if (manip == IP_NAT_MANIP_SRC) - statusbit = IPS_SRC_NAT; - else - statusbit = IPS_DST_NAT; - - /* Invert if this is reply dir. */ - if (dir == IP_CT_DIR_REPLY) - statusbit ^= IPS_NAT_MASK; - - if (ct->status & statusbit) { - invert_tuplepr(&target, &ct->tuplehash[!dir].tuple); - if (!manip_pkt(0, pskb, 0, &target, manip)) - return 0; - } - - return 1; -} -EXPORT_SYMBOL_GPL(ip_nat_icmp_reply_translation); - -/* Protocol registration. */ -int ip_nat_protocol_register(struct ip_nat_protocol *proto) -{ - int ret = 0; - - write_lock_bh(&ip_nat_lock); - if (ip_nat_protos[proto->protonum] != &ip_nat_unknown_protocol) { - ret = -EBUSY; - goto out; - } - rcu_assign_pointer(ip_nat_protos[proto->protonum], proto); - out: - write_unlock_bh(&ip_nat_lock); - return ret; -} -EXPORT_SYMBOL(ip_nat_protocol_register); - -/* Noone stores the protocol anywhere; simply delete it. */ -void ip_nat_protocol_unregister(struct ip_nat_protocol *proto) -{ - write_lock_bh(&ip_nat_lock); - rcu_assign_pointer(ip_nat_protos[proto->protonum], - &ip_nat_unknown_protocol); - write_unlock_bh(&ip_nat_lock); - synchronize_rcu(); -} -EXPORT_SYMBOL(ip_nat_protocol_unregister); - -#if defined(CONFIG_IP_NF_CONNTRACK_NETLINK) || \ - defined(CONFIG_IP_NF_CONNTRACK_NETLINK_MODULE) -int -ip_nat_port_range_to_nfattr(struct sk_buff *skb, - const struct ip_nat_range *range) -{ - NFA_PUT(skb, CTA_PROTONAT_PORT_MIN, sizeof(__be16), - &range->min.tcp.port); - NFA_PUT(skb, CTA_PROTONAT_PORT_MAX, sizeof(__be16), - &range->max.tcp.port); - - return 0; - -nfattr_failure: - return -1; -} - -int -ip_nat_port_nfattr_to_range(struct nfattr *tb[], struct ip_nat_range *range) -{ - int ret = 0; - - /* we have to return whether we actually parsed something or not */ - - if (tb[CTA_PROTONAT_PORT_MIN-1]) { - ret = 1; - range->min.tcp.port = - *(__be16 *)NFA_DATA(tb[CTA_PROTONAT_PORT_MIN-1]); - } - - if (!tb[CTA_PROTONAT_PORT_MAX-1]) { - if (ret) - range->max.tcp.port = range->min.tcp.port; - } else { - ret = 1; - range->max.tcp.port = - *(__be16 *)NFA_DATA(tb[CTA_PROTONAT_PORT_MAX-1]); - } - - return ret; -} -EXPORT_SYMBOL_GPL(ip_nat_port_nfattr_to_range); -EXPORT_SYMBOL_GPL(ip_nat_port_range_to_nfattr); -#endif - -static int __init ip_nat_init(void) -{ - size_t i; - - /* Leave them the same for the moment. 
*/ - ip_nat_htable_size = ip_conntrack_htable_size; - - /* One vmalloc for both hash tables */ - bysource = vmalloc(sizeof(struct list_head) * ip_nat_htable_size); - if (!bysource) - return -ENOMEM; - - /* Sew in builtin protocols. */ - write_lock_bh(&ip_nat_lock); - for (i = 0; i < MAX_IP_NAT_PROTO; i++) - rcu_assign_pointer(ip_nat_protos[i], &ip_nat_unknown_protocol); - rcu_assign_pointer(ip_nat_protos[IPPROTO_TCP], &ip_nat_protocol_tcp); - rcu_assign_pointer(ip_nat_protos[IPPROTO_UDP], &ip_nat_protocol_udp); - rcu_assign_pointer(ip_nat_protos[IPPROTO_ICMP], &ip_nat_protocol_icmp); - write_unlock_bh(&ip_nat_lock); - - for (i = 0; i < ip_nat_htable_size; i++) { - INIT_LIST_HEAD(&bysource[i]); - } - - /* FIXME: Man, this is a hack. */ - IP_NF_ASSERT(rcu_dereference(ip_conntrack_destroyed) == NULL); - rcu_assign_pointer(ip_conntrack_destroyed, ip_nat_cleanup_conntrack); - - /* Initialize fake conntrack so that NAT will skip it */ - ip_conntrack_untracked.status |= IPS_NAT_DONE_MASK; - return 0; -} - -/* Clear NAT section of all conntracks, in case we're loaded again. */ -static int clean_nat(struct ip_conntrack *i, void *data) -{ - memset(&i->nat, 0, sizeof(i->nat)); - i->status &= ~(IPS_NAT_MASK | IPS_NAT_DONE_MASK | IPS_SEQ_ADJUST); - return 0; -} - -static void __exit ip_nat_cleanup(void) -{ - ip_ct_iterate_cleanup(&clean_nat, NULL); - rcu_assign_pointer(ip_conntrack_destroyed, NULL); - synchronize_rcu(); - vfree(bysource); -} - -MODULE_LICENSE("GPL"); - -module_init(ip_nat_init); -module_exit(ip_nat_cleanup); diff -puN net/ipv4/netfilter/ip_nat_ftp.c~git-net /dev/null --- a/net/ipv4/netfilter/ip_nat_ftp.c +++ /dev/null @@ -1,180 +0,0 @@ -/* FTP extension for TCP NAT alteration. */ - -/* (C) 1999-2001 Paul `Rusty' Russell - * (C) 2002-2004 Netfilter Core Team - * - * This program is free software; you can redistribute it and/or modify - * it under the terms of the GNU General Public License version 2 as - * published by the Free Software Foundation. - */ - -#include -#include -#include -#include -#include -#include -#include -#include -#include -#include -#include - -MODULE_LICENSE("GPL"); -MODULE_AUTHOR("Rusty Russell "); -MODULE_DESCRIPTION("ftp NAT helper"); - -#if 0 -#define DEBUGP printk -#else -#define DEBUGP(format, args...) -#endif - -/* FIXME: Time out? 
--RR */ - -static int -mangle_rfc959_packet(struct sk_buff **pskb, - __be32 newip, - u_int16_t port, - unsigned int matchoff, - unsigned int matchlen, - struct ip_conntrack *ct, - enum ip_conntrack_info ctinfo, - u32 *seq) -{ - char buffer[sizeof("nnn,nnn,nnn,nnn,nnn,nnn")]; - - sprintf(buffer, "%u,%u,%u,%u,%u,%u", - NIPQUAD(newip), port>>8, port&0xFF); - - DEBUGP("calling ip_nat_mangle_tcp_packet\n"); - - *seq += strlen(buffer) - matchlen; - return ip_nat_mangle_tcp_packet(pskb, ct, ctinfo, matchoff, - matchlen, buffer, strlen(buffer)); -} - -/* |1|132.235.1.2|6275| */ -static int -mangle_eprt_packet(struct sk_buff **pskb, - __be32 newip, - u_int16_t port, - unsigned int matchoff, - unsigned int matchlen, - struct ip_conntrack *ct, - enum ip_conntrack_info ctinfo, - u32 *seq) -{ - char buffer[sizeof("|1|255.255.255.255|65535|")]; - - sprintf(buffer, "|1|%u.%u.%u.%u|%u|", NIPQUAD(newip), port); - - DEBUGP("calling ip_nat_mangle_tcp_packet\n"); - - *seq += strlen(buffer) - matchlen; - return ip_nat_mangle_tcp_packet(pskb, ct, ctinfo, matchoff, - matchlen, buffer, strlen(buffer)); -} - -/* |1|132.235.1.2|6275| */ -static int -mangle_epsv_packet(struct sk_buff **pskb, - __be32 newip, - u_int16_t port, - unsigned int matchoff, - unsigned int matchlen, - struct ip_conntrack *ct, - enum ip_conntrack_info ctinfo, - u32 *seq) -{ - char buffer[sizeof("|||65535|")]; - - sprintf(buffer, "|||%u|", port); - - DEBUGP("calling ip_nat_mangle_tcp_packet\n"); - - *seq += strlen(buffer) - matchlen; - return ip_nat_mangle_tcp_packet(pskb, ct, ctinfo, matchoff, - matchlen, buffer, strlen(buffer)); -} - -static int (*mangle[])(struct sk_buff **, __be32, u_int16_t, - unsigned int, - unsigned int, - struct ip_conntrack *, - enum ip_conntrack_info, - u32 *seq) -= { [IP_CT_FTP_PORT] = mangle_rfc959_packet, - [IP_CT_FTP_PASV] = mangle_rfc959_packet, - [IP_CT_FTP_EPRT] = mangle_eprt_packet, - [IP_CT_FTP_EPSV] = mangle_epsv_packet -}; - -/* So, this packet has hit the connection tracking matching code. - Mangle it, and change the expectation to match the new version. */ -static unsigned int ip_nat_ftp(struct sk_buff **pskb, - enum ip_conntrack_info ctinfo, - enum ip_ct_ftp_type type, - unsigned int matchoff, - unsigned int matchlen, - struct ip_conntrack_expect *exp, - u32 *seq) -{ - __be32 newip; - u_int16_t port; - int dir = CTINFO2DIR(ctinfo); - struct ip_conntrack *ct = exp->master; - - DEBUGP("FTP_NAT: type %i, off %u len %u\n", type, matchoff, matchlen); - - /* Connection will come from wherever this packet goes, hence !dir */ - newip = ct->tuplehash[!dir].tuple.dst.ip; - exp->saved_proto.tcp.port = exp->tuple.dst.u.tcp.port; - exp->dir = !dir; - - /* When you see the packet, we need to NAT it the same as the - * this one. */ - exp->expectfn = ip_nat_follow_master; - - /* Try to get same port: if not, try to change it. */ - for (port = ntohs(exp->saved_proto.tcp.port); port != 0; port++) { - exp->tuple.dst.u.tcp.port = htons(port); - if (ip_conntrack_expect_related(exp) == 0) - break; - } - - if (port == 0) - return NF_DROP; - - if (!mangle[type](pskb, newip, port, matchoff, matchlen, ct, ctinfo, - seq)) { - ip_conntrack_unexpect_related(exp); - return NF_DROP; - } - return NF_ACCEPT; -} - -static void __exit ip_nat_ftp_fini(void) -{ - rcu_assign_pointer(ip_nat_ftp_hook, NULL); - synchronize_rcu(); -} - -static int __init ip_nat_ftp_init(void) -{ - BUG_ON(rcu_dereference(ip_nat_ftp_hook)); - rcu_assign_pointer(ip_nat_ftp_hook, ip_nat_ftp); - return 0; -} - -/* Prior to 2.6.11, we had a ports param. 
No longer, but don't break users. */ -static int warn_set(const char *val, struct kernel_param *kp) -{ - printk(KERN_INFO KBUILD_MODNAME - ": kernel >= 2.6.10 only uses 'ports' for conntrack modules\n"); - return 0; -} -module_param_call(ports, warn_set, NULL, NULL, 0); - -module_init(ip_nat_ftp_init); -module_exit(ip_nat_ftp_fini); diff -puN net/ipv4/netfilter/ip_nat_helper.c~git-net /dev/null --- a/net/ipv4/netfilter/ip_nat_helper.c +++ /dev/null @@ -1,436 +0,0 @@ -/* ip_nat_helper.c - generic support functions for NAT helpers - * - * (C) 2000-2002 Harald Welte - * (C) 2003-2004 Netfilter Core Team - * - * This program is free software; you can redistribute it and/or modify - * it under the terms of the GNU General Public License version 2 as - * published by the Free Software Foundation. - * - * 14 Jan 2002 Harald Welte : - * - add support for SACK adjustment - * 14 Mar 2002 Harald Welte : - * - merge SACK support into newnat API - * 16 Aug 2002 Brian J. Murrell : - * - make ip_nat_resize_packet more generic (TCP and UDP) - * - add ip_nat_mangle_udp_packet - */ -#include -#include -#include -#include -#include -#include -#include -#include -#include -#include -#include - -#include -#include -#include -#include -#include -#include - -#if 0 -#define DEBUGP printk -#define DUMP_OFFSET(x) printk("offset_before=%d, offset_after=%d, correction_pos=%u\n", x->offset_before, x->offset_after, x->correction_pos); -#else -#define DEBUGP(format, args...) -#define DUMP_OFFSET(x) -#endif - -static DEFINE_SPINLOCK(ip_nat_seqofs_lock); - -/* Setup TCP sequence correction given this change at this sequence */ -static inline void -adjust_tcp_sequence(u32 seq, - int sizediff, - struct ip_conntrack *ct, - enum ip_conntrack_info ctinfo) -{ - int dir; - struct ip_nat_seq *this_way, *other_way; - - DEBUGP("ip_nat_resize_packet: old_size = %u, new_size = %u\n", - (*skb)->len, new_size); - - dir = CTINFO2DIR(ctinfo); - - this_way = &ct->nat.info.seq[dir]; - other_way = &ct->nat.info.seq[!dir]; - - DEBUGP("ip_nat_resize_packet: Seq_offset before: "); - DUMP_OFFSET(this_way); - - spin_lock_bh(&ip_nat_seqofs_lock); - - /* SYN adjust. If it's uninitialized, or this is after last - * correction, record it: we don't handle more than one - * adjustment in the window, but do deal with common case of a - * retransmit */ - if (this_way->offset_before == this_way->offset_after - || before(this_way->correction_pos, seq)) { - this_way->correction_pos = seq; - this_way->offset_before = this_way->offset_after; - this_way->offset_after += sizediff; - } - spin_unlock_bh(&ip_nat_seqofs_lock); - - DEBUGP("ip_nat_resize_packet: Seq_offset after: "); - DUMP_OFFSET(this_way); -} - -/* Frobs data inside this packet, which is linear. 
*/ -static void mangle_contents(struct sk_buff *skb, - unsigned int dataoff, - unsigned int match_offset, - unsigned int match_len, - const char *rep_buffer, - unsigned int rep_len) -{ - unsigned char *data; - - BUG_ON(skb_is_nonlinear(skb)); - data = (unsigned char *)skb->nh.iph + dataoff; - - /* move post-replacement */ - memmove(data + match_offset + rep_len, - data + match_offset + match_len, - skb->tail - (data + match_offset + match_len)); - - /* insert data from buffer */ - memcpy(data + match_offset, rep_buffer, rep_len); - - /* update skb info */ - if (rep_len > match_len) { - DEBUGP("ip_nat_mangle_packet: Extending packet by " - "%u from %u bytes\n", rep_len - match_len, - skb->len); - skb_put(skb, rep_len - match_len); - } else { - DEBUGP("ip_nat_mangle_packet: Shrinking packet from " - "%u from %u bytes\n", match_len - rep_len, - skb->len); - __skb_trim(skb, skb->len + rep_len - match_len); - } - - /* fix IP hdr checksum information */ - skb->nh.iph->tot_len = htons(skb->len); - ip_send_check(skb->nh.iph); -} - -/* Unusual, but possible case. */ -static int enlarge_skb(struct sk_buff **pskb, unsigned int extra) -{ - struct sk_buff *nskb; - - if ((*pskb)->len + extra > 65535) - return 0; - - nskb = skb_copy_expand(*pskb, skb_headroom(*pskb), extra, GFP_ATOMIC); - if (!nskb) - return 0; - - /* Transfer socket to new skb. */ - if ((*pskb)->sk) - skb_set_owner_w(nskb, (*pskb)->sk); - kfree_skb(*pskb); - *pskb = nskb; - return 1; -} - -/* Generic function for mangling variable-length address changes inside - * NATed TCP connections (like the PORT XXX,XXX,XXX,XXX,XXX,XXX - * command in FTP). - * - * Takes care about all the nasty sequence number changes, checksumming, - * skb enlargement, ... - * - * */ -int -ip_nat_mangle_tcp_packet(struct sk_buff **pskb, - struct ip_conntrack *ct, - enum ip_conntrack_info ctinfo, - unsigned int match_offset, - unsigned int match_len, - const char *rep_buffer, - unsigned int rep_len) -{ - struct iphdr *iph; - struct tcphdr *tcph; - int oldlen, datalen; - - if (!skb_make_writable(pskb, (*pskb)->len)) - return 0; - - if (rep_len > match_len - && rep_len - match_len > skb_tailroom(*pskb) - && !enlarge_skb(pskb, rep_len - match_len)) - return 0; - - SKB_LINEAR_ASSERT(*pskb); - - iph = (*pskb)->nh.iph; - tcph = (void *)iph + iph->ihl*4; - - oldlen = (*pskb)->len - iph->ihl*4; - mangle_contents(*pskb, iph->ihl*4 + tcph->doff*4, - match_offset, match_len, rep_buffer, rep_len); - - datalen = (*pskb)->len - iph->ihl*4; - if ((*pskb)->ip_summed != CHECKSUM_PARTIAL) { - tcph->check = 0; - tcph->check = tcp_v4_check(datalen, - iph->saddr, iph->daddr, - csum_partial((char *)tcph, - datalen, 0)); - } else - nf_proto_csum_replace2(&tcph->check, *pskb, - htons(oldlen), htons(datalen), 1); - - if (rep_len != match_len) { - set_bit(IPS_SEQ_ADJUST_BIT, &ct->status); - adjust_tcp_sequence(ntohl(tcph->seq), - (int)rep_len - (int)match_len, - ct, ctinfo); - /* Tell TCP window tracking about seq change */ - ip_conntrack_tcp_update(*pskb, ct, CTINFO2DIR(ctinfo)); - } - return 1; -} -EXPORT_SYMBOL(ip_nat_mangle_tcp_packet); - -/* Generic function for mangling variable-length address changes inside - * NATed UDP connections (like the CONNECT DATA XXXXX MESG XXXXX INDEX XXXXX - * command in the Amanda protocol) - * - * Takes care about all the nasty sequence number changes, checksumming, - * skb enlargement, ... - * - * XXX - This function could be merged with ip_nat_mangle_tcp_packet which - * should be fairly easy to do. 
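/*
 * Editor's aside -- not part of the patch.  The core of mangle_contents()
 * and ip_nat_mangle_tcp_packet() above is a plain buffer splice: shift the
 * tail of the payload so the replacement string fits exactly, copy the
 * replacement in, then fix up lengths and checksums.  A userspace-style
 * sketch of just the splice step; splice_replace is an illustrative name,
 * and the caller must guarantee the buffer has room for the grown payload,
 * as enlarge_skb() does in the code above.
 */
#include <string.h>

static size_t splice_replace(char *buf, size_t len,
			     size_t off, size_t match_len,
			     const char *rep, size_t rep_len)
{
	/* move everything after the matched region to its new position */
	memmove(buf + off + rep_len, buf + off + match_len,
		len - off - match_len);
	/* drop the replacement text into the hole */
	memcpy(buf + off, rep, rep_len);
	/* return the new payload length */
	return len + rep_len - match_len;
}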
- */ -int -ip_nat_mangle_udp_packet(struct sk_buff **pskb, - struct ip_conntrack *ct, - enum ip_conntrack_info ctinfo, - unsigned int match_offset, - unsigned int match_len, - const char *rep_buffer, - unsigned int rep_len) -{ - struct iphdr *iph; - struct udphdr *udph; - int datalen, oldlen; - - /* UDP helpers might accidentally mangle the wrong packet */ - iph = (*pskb)->nh.iph; - if ((*pskb)->len < iph->ihl*4 + sizeof(*udph) + - match_offset + match_len) - return 0; - - if (!skb_make_writable(pskb, (*pskb)->len)) - return 0; - - if (rep_len > match_len - && rep_len - match_len > skb_tailroom(*pskb) - && !enlarge_skb(pskb, rep_len - match_len)) - return 0; - - iph = (*pskb)->nh.iph; - udph = (void *)iph + iph->ihl*4; - - oldlen = (*pskb)->len - iph->ihl*4; - mangle_contents(*pskb, iph->ihl*4 + sizeof(*udph), - match_offset, match_len, rep_buffer, rep_len); - - /* update the length of the UDP packet */ - datalen = (*pskb)->len - iph->ihl*4; - udph->len = htons(datalen); - - /* fix udp checksum if udp checksum was previously calculated */ - if (!udph->check && (*pskb)->ip_summed != CHECKSUM_PARTIAL) - return 1; - - if ((*pskb)->ip_summed != CHECKSUM_PARTIAL) { - udph->check = 0; - udph->check = csum_tcpudp_magic(iph->saddr, iph->daddr, - datalen, IPPROTO_UDP, - csum_partial((char *)udph, - datalen, 0)); - if (!udph->check) - udph->check = CSUM_MANGLED_0; - } else - nf_proto_csum_replace2(&udph->check, *pskb, - htons(oldlen), htons(datalen), 1); - return 1; -} -EXPORT_SYMBOL(ip_nat_mangle_udp_packet); - -/* Adjust one found SACK option including checksum correction */ -static void -sack_adjust(struct sk_buff *skb, - struct tcphdr *tcph, - unsigned int sackoff, - unsigned int sackend, - struct ip_nat_seq *natseq) -{ - while (sackoff < sackend) { - struct tcp_sack_block_wire *sack; - __be32 new_start_seq, new_end_seq; - - sack = (void *)skb->data + sackoff; - if (after(ntohl(sack->start_seq) - natseq->offset_before, - natseq->correction_pos)) - new_start_seq = htonl(ntohl(sack->start_seq) - - natseq->offset_after); - else - new_start_seq = htonl(ntohl(sack->start_seq) - - natseq->offset_before); - - if (after(ntohl(sack->end_seq) - natseq->offset_before, - natseq->correction_pos)) - new_end_seq = htonl(ntohl(sack->end_seq) - - natseq->offset_after); - else - new_end_seq = htonl(ntohl(sack->end_seq) - - natseq->offset_before); - - DEBUGP("sack_adjust: start_seq: %d->%d, end_seq: %d->%d\n", - ntohl(sack->start_seq), new_start_seq, - ntohl(sack->end_seq), new_end_seq); - - nf_proto_csum_replace4(&tcph->check, skb, - sack->start_seq, new_start_seq, 0); - nf_proto_csum_replace4(&tcph->check, skb, - sack->end_seq, new_end_seq, 0); - sack->start_seq = new_start_seq; - sack->end_seq = new_end_seq; - sackoff += sizeof(*sack); - } -} - -/* TCP SACK sequence number adjustment */ -static inline unsigned int -ip_nat_sack_adjust(struct sk_buff **pskb, - struct tcphdr *tcph, - struct ip_conntrack *ct, - enum ip_conntrack_info ctinfo) -{ - unsigned int dir, optoff, optend; - - optoff = (*pskb)->nh.iph->ihl*4 + sizeof(struct tcphdr); - optend = (*pskb)->nh.iph->ihl*4 + tcph->doff*4; - - if (!skb_make_writable(pskb, optend)) - return 0; - - dir = CTINFO2DIR(ctinfo); - - while (optoff < optend) { - /* Usually: option, length. 
*/ - unsigned char *op = (*pskb)->data + optoff; - - switch (op[0]) { - case TCPOPT_EOL: - return 1; - case TCPOPT_NOP: - optoff++; - continue; - default: - /* no partial options */ - if (optoff + 1 == optend - || optoff + op[1] > optend - || op[1] < 2) - return 0; - if (op[0] == TCPOPT_SACK - && op[1] >= 2+TCPOLEN_SACK_PERBLOCK - && ((op[1] - 2) % TCPOLEN_SACK_PERBLOCK) == 0) - sack_adjust(*pskb, tcph, optoff+2, - optoff+op[1], - &ct->nat.info.seq[!dir]); - optoff += op[1]; - } - } - return 1; -} - -/* TCP sequence number adjustment. Returns 1 on success, 0 on failure */ -int -ip_nat_seq_adjust(struct sk_buff **pskb, - struct ip_conntrack *ct, - enum ip_conntrack_info ctinfo) -{ - struct tcphdr *tcph; - int dir; - __be32 newseq, newack; - struct ip_nat_seq *this_way, *other_way; - - dir = CTINFO2DIR(ctinfo); - - this_way = &ct->nat.info.seq[dir]; - other_way = &ct->nat.info.seq[!dir]; - - if (!skb_make_writable(pskb, (*pskb)->nh.iph->ihl*4+sizeof(*tcph))) - return 0; - - tcph = (void *)(*pskb)->data + (*pskb)->nh.iph->ihl*4; - if (after(ntohl(tcph->seq), this_way->correction_pos)) - newseq = htonl(ntohl(tcph->seq) + this_way->offset_after); - else - newseq = htonl(ntohl(tcph->seq) + this_way->offset_before); - - if (after(ntohl(tcph->ack_seq) - other_way->offset_before, - other_way->correction_pos)) - newack = htonl(ntohl(tcph->ack_seq) - other_way->offset_after); - else - newack = htonl(ntohl(tcph->ack_seq) - other_way->offset_before); - - nf_proto_csum_replace4(&tcph->check, *pskb, tcph->seq, newseq, 0); - nf_proto_csum_replace4(&tcph->check, *pskb, tcph->ack_seq, newack, 0); - - DEBUGP("Adjusting sequence number from %u->%u, ack from %u->%u\n", - ntohl(tcph->seq), ntohl(newseq), ntohl(tcph->ack_seq), - ntohl(newack)); - - tcph->seq = newseq; - tcph->ack_seq = newack; - - if (!ip_nat_sack_adjust(pskb, tcph, ct, ctinfo)) - return 0; - - ip_conntrack_tcp_update(*pskb, ct, dir); - - return 1; -} -EXPORT_SYMBOL(ip_nat_seq_adjust); - -/* Setup NAT on this expected conntrack so it follows master. */ -/* If we fail to get a free NAT slot, we'll get dropped on confirm */ -void ip_nat_follow_master(struct ip_conntrack *ct, - struct ip_conntrack_expect *exp) -{ - struct ip_nat_range range; - - /* This must be a fresh one. */ - BUG_ON(ct->status & IPS_NAT_DONE_MASK); - - /* Change src to where master sends to */ - range.flags = IP_NAT_RANGE_MAP_IPS; - range.min_ip = range.max_ip - = ct->master->tuplehash[!exp->dir].tuple.dst.ip; - /* hook doesn't matter, but it has to do source manip */ - ip_nat_setup_info(ct, &range, NF_IP_POST_ROUTING); - - /* For DST manip, map port here to where it's expected. */ - range.flags = (IP_NAT_RANGE_MAP_IPS | IP_NAT_RANGE_PROTO_SPECIFIED); - range.min = range.max = exp->saved_proto; - range.min_ip = range.max_ip - = ct->master->tuplehash[!exp->dir].tuple.src.ip; - /* hook doesn't matter, but it has to do destination manip */ - ip_nat_setup_info(ct, &range, NF_IP_PRE_ROUTING); -} -EXPORT_SYMBOL(ip_nat_follow_master); diff -puN net/ipv4/netfilter/ip_nat_helper_h323.c~git-net /dev/null --- a/net/ipv4/netfilter/ip_nat_helper_h323.c +++ /dev/null @@ -1,611 +0,0 @@ -/* - * H.323 extension for NAT alteration. - * - * Copyright (c) 2006 Jing Min Zhao - * - * This source code is licensed under General Public License version 2. 
- * - * Based on the 'brute force' H.323 NAT module by - * Jozsef Kadlecsik - */ - -#include -#include -#include -#include -#include -#include -#include -#include -#include -#include -#include -#include -#include - -#if 0 -#define DEBUGP printk -#else -#define DEBUGP(format, args...) -#endif - -/****************************************************************************/ -static int set_addr(struct sk_buff **pskb, - unsigned char **data, int dataoff, - unsigned int addroff, __be32 ip, u_int16_t port) -{ - enum ip_conntrack_info ctinfo; - struct ip_conntrack *ct = ip_conntrack_get(*pskb, &ctinfo); - struct { - __be32 ip; - __be16 port; - } __attribute__ ((__packed__)) buf; - struct tcphdr _tcph, *th; - - buf.ip = ip; - buf.port = htons(port); - addroff += dataoff; - - if ((*pskb)->nh.iph->protocol == IPPROTO_TCP) { - if (!ip_nat_mangle_tcp_packet(pskb, ct, ctinfo, - addroff, sizeof(buf), - (char *) &buf, sizeof(buf))) { - if (net_ratelimit()) - printk("ip_nat_h323: ip_nat_mangle_tcp_packet" - " error\n"); - return -1; - } - - /* Relocate data pointer */ - th = skb_header_pointer(*pskb, (*pskb)->nh.iph->ihl * 4, - sizeof(_tcph), &_tcph); - if (th == NULL) - return -1; - *data = (*pskb)->data + (*pskb)->nh.iph->ihl * 4 + - th->doff * 4 + dataoff; - } else { - if (!ip_nat_mangle_udp_packet(pskb, ct, ctinfo, - addroff, sizeof(buf), - (char *) &buf, sizeof(buf))) { - if (net_ratelimit()) - printk("ip_nat_h323: ip_nat_mangle_udp_packet" - " error\n"); - return -1; - } - /* ip_nat_mangle_udp_packet uses skb_make_writable() to copy - * or pull everything in a linear buffer, so we can safely - * use the skb pointers now */ - *data = (*pskb)->data + (*pskb)->nh.iph->ihl * 4 + - sizeof(struct udphdr); - } - - return 0; -} - -/****************************************************************************/ -static int set_h225_addr(struct sk_buff **pskb, - unsigned char **data, int dataoff, - TransportAddress * addr, - __be32 ip, u_int16_t port) -{ - return set_addr(pskb, data, dataoff, addr->ipAddress.ip, ip, port); -} - -/****************************************************************************/ -static int set_h245_addr(struct sk_buff **pskb, - unsigned char **data, int dataoff, - H245_TransportAddress * addr, - __be32 ip, u_int16_t port) -{ - return set_addr(pskb, data, dataoff, - addr->unicastAddress.iPAddress.network, ip, port); -} - -/****************************************************************************/ -static int set_sig_addr(struct sk_buff **pskb, struct ip_conntrack *ct, - enum ip_conntrack_info ctinfo, - unsigned char **data, - TransportAddress * addr, int count) -{ - struct ip_ct_h323_master *info = &ct->help.ct_h323_info; - int dir = CTINFO2DIR(ctinfo); - int i; - __be32 ip; - u_int16_t port; - - for (i = 0; i < count; i++) { - if (get_h225_addr(*data, &addr[i], &ip, &port)) { - if (ip == ct->tuplehash[dir].tuple.src.ip && - port == info->sig_port[dir]) { - /* GW->GK */ - - /* Fix for Gnomemeeting */ - if (i > 0 && - get_h225_addr(*data, &addr[0], - &ip, &port) && - (ntohl(ip) & 0xff000000) == 0x7f000000) - i = 0; - - DEBUGP - ("ip_nat_ras: set signal address " - "%u.%u.%u.%u:%hu->%u.%u.%u.%u:%hu\n", - NIPQUAD(ip), port, - NIPQUAD(ct->tuplehash[!dir].tuple.dst. - ip), info->sig_port[!dir]); - return set_h225_addr(pskb, data, 0, &addr[i], - ct->tuplehash[!dir]. 
- tuple.dst.ip, - info->sig_port[!dir]); - } else if (ip == ct->tuplehash[dir].tuple.dst.ip && - port == info->sig_port[dir]) { - /* GK->GW */ - DEBUGP - ("ip_nat_ras: set signal address " - "%u.%u.%u.%u:%hu->%u.%u.%u.%u:%hu\n", - NIPQUAD(ip), port, - NIPQUAD(ct->tuplehash[!dir].tuple.src. - ip), info->sig_port[!dir]); - return set_h225_addr(pskb, data, 0, &addr[i], - ct->tuplehash[!dir]. - tuple.src.ip, - info->sig_port[!dir]); - } - } - } - - return 0; -} - -/****************************************************************************/ -static int set_ras_addr(struct sk_buff **pskb, struct ip_conntrack *ct, - enum ip_conntrack_info ctinfo, - unsigned char **data, - TransportAddress * addr, int count) -{ - int dir = CTINFO2DIR(ctinfo); - int i; - __be32 ip; - u_int16_t port; - - for (i = 0; i < count; i++) { - if (get_h225_addr(*data, &addr[i], &ip, &port) && - ip == ct->tuplehash[dir].tuple.src.ip && - port == ntohs(ct->tuplehash[dir].tuple.src.u.udp.port)) { - DEBUGP("ip_nat_ras: set rasAddress " - "%u.%u.%u.%u:%hu->%u.%u.%u.%u:%hu\n", - NIPQUAD(ip), port, - NIPQUAD(ct->tuplehash[!dir].tuple.dst.ip), - ntohs(ct->tuplehash[!dir].tuple.dst.u.udp. - port)); - return set_h225_addr(pskb, data, 0, &addr[i], - ct->tuplehash[!dir].tuple.dst.ip, - ntohs(ct->tuplehash[!dir].tuple. - dst.u.udp.port)); - } - } - - return 0; -} - -/****************************************************************************/ -static int nat_rtp_rtcp(struct sk_buff **pskb, struct ip_conntrack *ct, - enum ip_conntrack_info ctinfo, - unsigned char **data, int dataoff, - H245_TransportAddress * addr, - u_int16_t port, u_int16_t rtp_port, - struct ip_conntrack_expect *rtp_exp, - struct ip_conntrack_expect *rtcp_exp) -{ - struct ip_ct_h323_master *info = &ct->help.ct_h323_info; - int dir = CTINFO2DIR(ctinfo); - int i; - u_int16_t nated_port; - - /* Set expectations for NAT */ - rtp_exp->saved_proto.udp.port = rtp_exp->tuple.dst.u.udp.port; - rtp_exp->expectfn = ip_nat_follow_master; - rtp_exp->dir = !dir; - rtcp_exp->saved_proto.udp.port = rtcp_exp->tuple.dst.u.udp.port; - rtcp_exp->expectfn = ip_nat_follow_master; - rtcp_exp->dir = !dir; - - /* Lookup existing expects */ - for (i = 0; i < H323_RTP_CHANNEL_MAX; i++) { - if (info->rtp_port[i][dir] == rtp_port) { - /* Expected */ - - /* Use allocated ports first. This will refresh - * the expects */ - rtp_exp->tuple.dst.u.udp.port = - htons(info->rtp_port[i][dir]); - rtcp_exp->tuple.dst.u.udp.port = - htons(info->rtp_port[i][dir] + 1); - break; - } else if (info->rtp_port[i][dir] == 0) { - /* Not expected */ - break; - } - } - - /* Run out of expectations */ - if (i >= H323_RTP_CHANNEL_MAX) { - if (net_ratelimit()) - printk("ip_nat_h323: out of expectations\n"); - return 0; - } - - /* Try to get a pair of ports. */ - for (nated_port = ntohs(rtp_exp->tuple.dst.u.udp.port); - nated_port != 0; nated_port += 2) { - rtp_exp->tuple.dst.u.udp.port = htons(nated_port); - if (ip_conntrack_expect_related(rtp_exp) == 0) { - rtcp_exp->tuple.dst.u.udp.port = - htons(nated_port + 1); - if (ip_conntrack_expect_related(rtcp_exp) == 0) - break; - ip_conntrack_unexpect_related(rtp_exp); - } - } - - if (nated_port == 0) { /* No port available */ - if (net_ratelimit()) - printk("ip_nat_h323: out of RTP ports\n"); - return 0; - } - - /* Modify signal */ - if (set_h245_addr(pskb, data, dataoff, addr, - ct->tuplehash[!dir].tuple.dst.ip, - (port & 1) ? 
nated_port + 1 : nated_port) == 0) { - /* Save ports */ - info->rtp_port[i][dir] = rtp_port; - info->rtp_port[i][!dir] = nated_port; - } else { - ip_conntrack_unexpect_related(rtp_exp); - ip_conntrack_unexpect_related(rtcp_exp); - return -1; - } - - /* Success */ - DEBUGP("ip_nat_h323: expect RTP %u.%u.%u.%u:%hu->%u.%u.%u.%u:%hu\n", - NIPQUAD(rtp_exp->tuple.src.ip), - ntohs(rtp_exp->tuple.src.u.udp.port), - NIPQUAD(rtp_exp->tuple.dst.ip), - ntohs(rtp_exp->tuple.dst.u.udp.port)); - DEBUGP("ip_nat_h323: expect RTCP %u.%u.%u.%u:%hu->%u.%u.%u.%u:%hu\n", - NIPQUAD(rtcp_exp->tuple.src.ip), - ntohs(rtcp_exp->tuple.src.u.udp.port), - NIPQUAD(rtcp_exp->tuple.dst.ip), - ntohs(rtcp_exp->tuple.dst.u.udp.port)); - - return 0; -} - -/****************************************************************************/ -static int nat_t120(struct sk_buff **pskb, struct ip_conntrack *ct, - enum ip_conntrack_info ctinfo, - unsigned char **data, int dataoff, - H245_TransportAddress * addr, u_int16_t port, - struct ip_conntrack_expect *exp) -{ - int dir = CTINFO2DIR(ctinfo); - u_int16_t nated_port = port; - - /* Set expectations for NAT */ - exp->saved_proto.tcp.port = exp->tuple.dst.u.tcp.port; - exp->expectfn = ip_nat_follow_master; - exp->dir = !dir; - - /* Try to get same port: if not, try to change it. */ - for (; nated_port != 0; nated_port++) { - exp->tuple.dst.u.tcp.port = htons(nated_port); - if (ip_conntrack_expect_related(exp) == 0) - break; - } - - if (nated_port == 0) { /* No port available */ - if (net_ratelimit()) - printk("ip_nat_h323: out of TCP ports\n"); - return 0; - } - - /* Modify signal */ - if (set_h245_addr(pskb, data, dataoff, addr, - ct->tuplehash[!dir].tuple.dst.ip, nated_port) < 0) { - ip_conntrack_unexpect_related(exp); - return -1; - } - - DEBUGP("ip_nat_h323: expect T.120 %u.%u.%u.%u:%hu->%u.%u.%u.%u:%hu\n", - NIPQUAD(exp->tuple.src.ip), ntohs(exp->tuple.src.u.tcp.port), - NIPQUAD(exp->tuple.dst.ip), ntohs(exp->tuple.dst.u.tcp.port)); - - return 0; -} - -/**************************************************************************** - * This conntrack expect function replaces ip_conntrack_h245_expect() - * which was set by ip_conntrack_helper_h323.c. It calls both - * ip_nat_follow_master() and ip_conntrack_h245_expect() - ****************************************************************************/ -static void ip_nat_h245_expect(struct ip_conntrack *new, - struct ip_conntrack_expect *this) -{ - ip_nat_follow_master(new, this); - ip_conntrack_h245_expect(new, this); -} - -/****************************************************************************/ -static int nat_h245(struct sk_buff **pskb, struct ip_conntrack *ct, - enum ip_conntrack_info ctinfo, - unsigned char **data, int dataoff, - TransportAddress * addr, u_int16_t port, - struct ip_conntrack_expect *exp) -{ - struct ip_ct_h323_master *info = &ct->help.ct_h323_info; - int dir = CTINFO2DIR(ctinfo); - u_int16_t nated_port = port; - - /* Set expectations for NAT */ - exp->saved_proto.tcp.port = exp->tuple.dst.u.tcp.port; - exp->expectfn = ip_nat_h245_expect; - exp->dir = !dir; - - /* Check existing expects */ - if (info->sig_port[dir] == port) - nated_port = info->sig_port[!dir]; - - /* Try to get same port: if not, try to change it. 
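The RTP/RTCP handling above reserves ports in consecutive pairs and then writes back either the base or base+1 depending on the parity of the originally signalled port, so RTCP keeps landing on the odd member of the pair. A rough sketch of just that selection, where port_pair_free() is a stand-in for the expectation calls and the numbers are arbitrary:

/* Sketch: allocate a consecutive UDP port pair and pick which member
 * to write back into the signalling, by parity of the original port. */
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

static bool port_pair_free(uint16_t base)
{
    return base >= 30000;          /* pretend everything below is taken */
}

static uint16_t nat_rtp_pair(uint16_t orig_port, uint16_t start)
{
    uint16_t base;

    for (base = start; base != 0; base += 2)
        if (port_pair_free(base))
            break;
    if (base == 0)
        return 0;                  /* ran out of ports */

    /* odd original port (RTCP) -> odd member of the new pair */
    return (orig_port & 1) ? base + 1 : base;
}

int main(void)
{
    printf("RTP  16384 -> %u\n", nat_rtp_pair(16384, 29998));
    printf("RTCP 16385 -> %u\n", nat_rtp_pair(16385, 29998));
    return 0;
}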
*/ - for (; nated_port != 0; nated_port++) { - exp->tuple.dst.u.tcp.port = htons(nated_port); - if (ip_conntrack_expect_related(exp) == 0) - break; - } - - if (nated_port == 0) { /* No port available */ - if (net_ratelimit()) - printk("ip_nat_q931: out of TCP ports\n"); - return 0; - } - - /* Modify signal */ - if (set_h225_addr(pskb, data, dataoff, addr, - ct->tuplehash[!dir].tuple.dst.ip, - nated_port) == 0) { - /* Save ports */ - info->sig_port[dir] = port; - info->sig_port[!dir] = nated_port; - } else { - ip_conntrack_unexpect_related(exp); - return -1; - } - - DEBUGP("ip_nat_q931: expect H.245 %u.%u.%u.%u:%hu->%u.%u.%u.%u:%hu\n", - NIPQUAD(exp->tuple.src.ip), ntohs(exp->tuple.src.u.tcp.port), - NIPQUAD(exp->tuple.dst.ip), ntohs(exp->tuple.dst.u.tcp.port)); - - return 0; -} - -/**************************************************************************** - * This conntrack expect function replaces ip_conntrack_q931_expect() - * which was set by ip_conntrack_helper_h323.c. - ****************************************************************************/ -static void ip_nat_q931_expect(struct ip_conntrack *new, - struct ip_conntrack_expect *this) -{ - struct ip_nat_range range; - - if (this->tuple.src.ip != 0) { /* Only accept calls from GK */ - ip_nat_follow_master(new, this); - goto out; - } - - /* This must be a fresh one. */ - BUG_ON(new->status & IPS_NAT_DONE_MASK); - - /* Change src to where master sends to */ - range.flags = IP_NAT_RANGE_MAP_IPS; - range.min_ip = range.max_ip = new->tuplehash[!this->dir].tuple.src.ip; - - /* hook doesn't matter, but it has to do source manip */ - ip_nat_setup_info(new, &range, NF_IP_POST_ROUTING); - - /* For DST manip, map port here to where it's expected. */ - range.flags = (IP_NAT_RANGE_MAP_IPS | IP_NAT_RANGE_PROTO_SPECIFIED); - range.min = range.max = this->saved_proto; - range.min_ip = range.max_ip = - new->master->tuplehash[!this->dir].tuple.src.ip; - - /* hook doesn't matter, but it has to do destination manip */ - ip_nat_setup_info(new, &range, NF_IP_PRE_ROUTING); - - out: - ip_conntrack_q931_expect(new, this); -} - -/****************************************************************************/ -static int nat_q931(struct sk_buff **pskb, struct ip_conntrack *ct, - enum ip_conntrack_info ctinfo, - unsigned char **data, TransportAddress * addr, int idx, - u_int16_t port, struct ip_conntrack_expect *exp) -{ - struct ip_ct_h323_master *info = &ct->help.ct_h323_info; - int dir = CTINFO2DIR(ctinfo); - u_int16_t nated_port = port; - __be32 ip; - - /* Set expectations for NAT */ - exp->saved_proto.tcp.port = exp->tuple.dst.u.tcp.port; - exp->expectfn = ip_nat_q931_expect; - exp->dir = !dir; - - /* Check existing expects */ - if (info->sig_port[dir] == port) - nated_port = info->sig_port[!dir]; - - /* Try to get same port: if not, try to change it. 
*/ - for (; nated_port != 0; nated_port++) { - exp->tuple.dst.u.tcp.port = htons(nated_port); - if (ip_conntrack_expect_related(exp) == 0) - break; - } - - if (nated_port == 0) { /* No port available */ - if (net_ratelimit()) - printk("ip_nat_ras: out of TCP ports\n"); - return 0; - } - - /* Modify signal */ - if (set_h225_addr(pskb, data, 0, &addr[idx], - ct->tuplehash[!dir].tuple.dst.ip, - nated_port) == 0) { - /* Save ports */ - info->sig_port[dir] = port; - info->sig_port[!dir] = nated_port; - - /* Fix for Gnomemeeting */ - if (idx > 0 && - get_h225_addr(*data, &addr[0], &ip, &port) && - (ntohl(ip) & 0xff000000) == 0x7f000000) { - set_h225_addr_hook(pskb, data, 0, &addr[0], - ct->tuplehash[!dir].tuple.dst.ip, - info->sig_port[!dir]); - } - } else { - ip_conntrack_unexpect_related(exp); - return -1; - } - - /* Success */ - DEBUGP("ip_nat_ras: expect Q.931 %u.%u.%u.%u:%hu->%u.%u.%u.%u:%hu\n", - NIPQUAD(exp->tuple.src.ip), ntohs(exp->tuple.src.u.tcp.port), - NIPQUAD(exp->tuple.dst.ip), ntohs(exp->tuple.dst.u.tcp.port)); - - return 0; -} - -/****************************************************************************/ -static void ip_nat_callforwarding_expect(struct ip_conntrack *new, - struct ip_conntrack_expect *this) -{ - struct ip_nat_range range; - - /* This must be a fresh one. */ - BUG_ON(new->status & IPS_NAT_DONE_MASK); - - /* Change src to where master sends to */ - range.flags = IP_NAT_RANGE_MAP_IPS; - range.min_ip = range.max_ip = new->tuplehash[!this->dir].tuple.src.ip; - - /* hook doesn't matter, but it has to do source manip */ - ip_nat_setup_info(new, &range, NF_IP_POST_ROUTING); - - /* For DST manip, map port here to where it's expected. */ - range.flags = (IP_NAT_RANGE_MAP_IPS | IP_NAT_RANGE_PROTO_SPECIFIED); - range.min = range.max = this->saved_proto; - range.min_ip = range.max_ip = this->saved_ip; - - /* hook doesn't matter, but it has to do destination manip */ - ip_nat_setup_info(new, &range, NF_IP_PRE_ROUTING); - - ip_conntrack_q931_expect(new, this); -} - -/****************************************************************************/ -static int nat_callforwarding(struct sk_buff **pskb, struct ip_conntrack *ct, - enum ip_conntrack_info ctinfo, - unsigned char **data, int dataoff, - TransportAddress * addr, u_int16_t port, - struct ip_conntrack_expect *exp) -{ - int dir = CTINFO2DIR(ctinfo); - u_int16_t nated_port; - - /* Set expectations for NAT */ - exp->saved_ip = exp->tuple.dst.ip; - exp->tuple.dst.ip = ct->tuplehash[!dir].tuple.dst.ip; - exp->saved_proto.tcp.port = exp->tuple.dst.u.tcp.port; - exp->expectfn = ip_nat_callforwarding_expect; - exp->dir = !dir; - - /* Try to get same port: if not, try to change it. 
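The "Fix for Gnomemeeting" branches above special-case endpoints that signal a loopback address by testing whether the address falls in 127.0.0.0/8. A tiny standalone check of that test on network-byte-order addresses:

/* Sketch of the loopback test used by the Gnomemeeting workaround. */
#include <arpa/inet.h>
#include <stdint.h>
#include <stdio.h>

static int is_loopback_be(uint32_t ip_be)
{
    return (ntohl(ip_be) & 0xff000000) == 0x7f000000;
}

int main(void)
{
    struct in_addr a, b;

    inet_pton(AF_INET, "127.0.0.1", &a);
    inet_pton(AF_INET, "192.168.0.1", &b);
    printf("127.0.0.1   -> %d\n", is_loopback_be(a.s_addr));
    printf("192.168.0.1 -> %d\n", is_loopback_be(b.s_addr));
    return 0;
}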
*/ - for (nated_port = port; nated_port != 0; nated_port++) { - exp->tuple.dst.u.tcp.port = htons(nated_port); - if (ip_conntrack_expect_related(exp) == 0) - break; - } - - if (nated_port == 0) { /* No port available */ - if (net_ratelimit()) - printk("ip_nat_q931: out of TCP ports\n"); - return 0; - } - - /* Modify signal */ - if (!set_h225_addr(pskb, data, dataoff, addr, - ct->tuplehash[!dir].tuple.dst.ip, - nated_port) == 0) { - ip_conntrack_unexpect_related(exp); - return -1; - } - - /* Success */ - DEBUGP("ip_nat_q931: expect Call Forwarding " - "%u.%u.%u.%u:%hu->%u.%u.%u.%u:%hu\n", - NIPQUAD(exp->tuple.src.ip), ntohs(exp->tuple.src.u.tcp.port), - NIPQUAD(exp->tuple.dst.ip), ntohs(exp->tuple.dst.u.tcp.port)); - - return 0; -} - -/****************************************************************************/ -static int __init init(void) -{ - BUG_ON(rcu_dereference(set_h245_addr_hook) != NULL); - BUG_ON(rcu_dereference(set_h225_addr_hook) != NULL); - BUG_ON(rcu_dereference(set_sig_addr_hook) != NULL); - BUG_ON(rcu_dereference(set_ras_addr_hook) != NULL); - BUG_ON(rcu_dereference(nat_rtp_rtcp_hook) != NULL); - BUG_ON(rcu_dereference(nat_t120_hook) != NULL); - BUG_ON(rcu_dereference(nat_h245_hook) != NULL); - BUG_ON(rcu_dereference(nat_callforwarding_hook) != NULL); - BUG_ON(rcu_dereference(nat_q931_hook) != NULL); - - rcu_assign_pointer(set_h245_addr_hook, set_h245_addr); - rcu_assign_pointer(set_h225_addr_hook, set_h225_addr); - rcu_assign_pointer(set_sig_addr_hook, set_sig_addr); - rcu_assign_pointer(set_ras_addr_hook, set_ras_addr); - rcu_assign_pointer(nat_rtp_rtcp_hook, nat_rtp_rtcp); - rcu_assign_pointer(nat_t120_hook, nat_t120); - rcu_assign_pointer(nat_h245_hook, nat_h245); - rcu_assign_pointer(nat_callforwarding_hook, nat_callforwarding); - rcu_assign_pointer(nat_q931_hook, nat_q931); - - DEBUGP("ip_nat_h323: init success\n"); - return 0; -} - -/****************************************************************************/ -static void __exit fini(void) -{ - rcu_assign_pointer(set_h245_addr_hook, NULL); - rcu_assign_pointer(set_h225_addr_hook, NULL); - rcu_assign_pointer(set_sig_addr_hook, NULL); - rcu_assign_pointer(set_ras_addr_hook, NULL); - rcu_assign_pointer(nat_rtp_rtcp_hook, NULL); - rcu_assign_pointer(nat_t120_hook, NULL); - rcu_assign_pointer(nat_h245_hook, NULL); - rcu_assign_pointer(nat_callforwarding_hook, NULL); - rcu_assign_pointer(nat_q931_hook, NULL); - synchronize_rcu(); -} - -/****************************************************************************/ -module_init(init); -module_exit(fini); - -MODULE_AUTHOR("Jing Min Zhao "); -MODULE_DESCRIPTION("H.323 NAT helper"); -MODULE_LICENSE("GPL"); diff -puN net/ipv4/netfilter/ip_nat_helper_pptp.c~git-net /dev/null --- a/net/ipv4/netfilter/ip_nat_helper_pptp.c +++ /dev/null @@ -1,350 +0,0 @@ -/* - * ip_nat_pptp.c - Version 3.0 - * - * NAT support for PPTP (Point to Point Tunneling Protocol). - * PPTP is a a protocol for creating virtual private networks. - * It is a specification defined by Microsoft and some vendors - * working with Microsoft. PPTP is built on top of a modified - * version of the Internet Generic Routing Encapsulation Protocol. - * GRE is defined in RFC 1701 and RFC 1702. 
Documentation of - * PPTP can be found in RFC 2637 - * - * (C) 2000-2005 by Harald Welte - * - * Development of this code funded by Astaro AG (http://www.astaro.com/) - * - * TODO: - NAT to a unique tuple, not to TCP source port - * (needs netfilter tuple reservation) - * - * Changes: - * 2002-02-10 - Version 1.3 - * - Use ip_nat_mangle_tcp_packet() because of cloned skb's - * in local connections (Philip Craig ) - * - add checks for magicCookie and pptp version - * - make argument list of pptp_{out,in}bound_packet() shorter - * - move to C99 style initializers - * - print version number at module loadtime - * 2003-09-22 - Version 1.5 - * - use SNATed tcp sourceport as callid, since we get called before - * TCP header is mangled (Philip Craig ) - * 2004-10-22 - Version 2.0 - * - kernel 2.6.x version - * 2005-06-10 - Version 3.0 - * - kernel >= 2.6.11 version, - * funded by Oxcoda NetBox Blue (http://www.netboxblue.com/) - * - */ - -#include -#include -#include -#include - -#include -#include -#include -#include -#include -#include -#include -#include - -#define IP_NAT_PPTP_VERSION "3.0" - -#define REQ_CID(req, off) (*(__be16 *)((char *)(req) + (off))) - -MODULE_LICENSE("GPL"); -MODULE_AUTHOR("Harald Welte "); -MODULE_DESCRIPTION("Netfilter NAT helper module for PPTP"); - - -#if 0 -extern const char *pptp_msg_name[]; -#define DEBUGP(format, args...) printk(KERN_DEBUG "%s:%s: " format, __FILE__, \ - __FUNCTION__, ## args) -#else -#define DEBUGP(format, args...) -#endif - -static void pptp_nat_expected(struct ip_conntrack *ct, - struct ip_conntrack_expect *exp) -{ - struct ip_conntrack *master = ct->master; - struct ip_conntrack_expect *other_exp; - struct ip_conntrack_tuple t; - struct ip_ct_pptp_master *ct_pptp_info; - struct ip_nat_pptp *nat_pptp_info; - struct ip_nat_range range; - - ct_pptp_info = &master->help.ct_pptp_info; - nat_pptp_info = &master->nat.help.nat_pptp_info; - - /* And here goes the grand finale of corrosion... */ - - if (exp->dir == IP_CT_DIR_ORIGINAL) { - DEBUGP("we are PNS->PAC\n"); - /* therefore, build tuple for PAC->PNS */ - t.src.ip = master->tuplehash[IP_CT_DIR_REPLY].tuple.src.ip; - t.src.u.gre.key = master->help.ct_pptp_info.pac_call_id; - t.dst.ip = master->tuplehash[IP_CT_DIR_REPLY].tuple.dst.ip; - t.dst.u.gre.key = master->help.ct_pptp_info.pns_call_id; - t.dst.protonum = IPPROTO_GRE; - } else { - DEBUGP("we are PAC->PNS\n"); - /* build tuple for PNS->PAC */ - t.src.ip = master->tuplehash[IP_CT_DIR_ORIGINAL].tuple.src.ip; - t.src.u.gre.key = master->nat.help.nat_pptp_info.pns_call_id; - t.dst.ip = master->tuplehash[IP_CT_DIR_ORIGINAL].tuple.dst.ip; - t.dst.u.gre.key = master->nat.help.nat_pptp_info.pac_call_id; - t.dst.protonum = IPPROTO_GRE; - } - - DEBUGP("trying to unexpect other dir: "); - DUMP_TUPLE(&t); - other_exp = ip_conntrack_expect_find_get(&t); - if (other_exp) { - ip_conntrack_unexpect_related(other_exp); - ip_conntrack_expect_put(other_exp); - DEBUGP("success\n"); - } else { - DEBUGP("not found!\n"); - } - - /* This must be a fresh one. */ - BUG_ON(ct->status & IPS_NAT_DONE_MASK); - - /* Change src to where master sends to */ - range.flags = IP_NAT_RANGE_MAP_IPS; - range.min_ip = range.max_ip - = ct->master->tuplehash[!exp->dir].tuple.dst.ip; - if (exp->dir == IP_CT_DIR_ORIGINAL) { - range.flags |= IP_NAT_RANGE_PROTO_SPECIFIED; - range.min = range.max = exp->saved_proto; - } - /* hook doesn't matter, but it has to do source manip */ - ip_nat_setup_info(ct, &range, NF_IP_POST_ROUTING); - - /* For DST manip, map port here to where it's expected. 
*/ - range.flags = IP_NAT_RANGE_MAP_IPS; - range.min_ip = range.max_ip - = ct->master->tuplehash[!exp->dir].tuple.src.ip; - if (exp->dir == IP_CT_DIR_REPLY) { - range.flags |= IP_NAT_RANGE_PROTO_SPECIFIED; - range.min = range.max = exp->saved_proto; - } - /* hook doesn't matter, but it has to do destination manip */ - ip_nat_setup_info(ct, &range, NF_IP_PRE_ROUTING); -} - -/* outbound packets == from PNS to PAC */ -static int -pptp_outbound_pkt(struct sk_buff **pskb, - struct ip_conntrack *ct, - enum ip_conntrack_info ctinfo, - struct PptpControlHeader *ctlh, - union pptp_ctrl_union *pptpReq) - -{ - struct ip_ct_pptp_master *ct_pptp_info = &ct->help.ct_pptp_info; - struct ip_nat_pptp *nat_pptp_info = &ct->nat.help.nat_pptp_info; - u_int16_t msg; - __be16 new_callid; - unsigned int cid_off; - - new_callid = ct_pptp_info->pns_call_id; - - switch (msg = ntohs(ctlh->messageType)) { - case PPTP_OUT_CALL_REQUEST: - cid_off = offsetof(union pptp_ctrl_union, ocreq.callID); - /* FIXME: ideally we would want to reserve a call ID - * here. current netfilter NAT core is not able to do - * this :( For now we use TCP source port. This breaks - * multiple calls within one control session */ - - /* save original call ID in nat_info */ - nat_pptp_info->pns_call_id = ct_pptp_info->pns_call_id; - - /* don't use tcph->source since we are at a DSTmanip - * hook (e.g. PREROUTING) and pkt is not mangled yet */ - new_callid = ct->tuplehash[IP_CT_DIR_REPLY].tuple.dst.u.tcp.port; - - /* save new call ID in ct info */ - ct_pptp_info->pns_call_id = new_callid; - break; - case PPTP_IN_CALL_REPLY: - cid_off = offsetof(union pptp_ctrl_union, icack.callID); - break; - case PPTP_CALL_CLEAR_REQUEST: - cid_off = offsetof(union pptp_ctrl_union, clrreq.callID); - break; - default: - DEBUGP("unknown outbound packet 0x%04x:%s\n", msg, - (msg <= PPTP_MSG_MAX)? 
- pptp_msg_name[msg]:pptp_msg_name[0]); - /* fall through */ - - case PPTP_SET_LINK_INFO: - /* only need to NAT in case PAC is behind NAT box */ - case PPTP_START_SESSION_REQUEST: - case PPTP_START_SESSION_REPLY: - case PPTP_STOP_SESSION_REQUEST: - case PPTP_STOP_SESSION_REPLY: - case PPTP_ECHO_REQUEST: - case PPTP_ECHO_REPLY: - /* no need to alter packet */ - return NF_ACCEPT; - } - - /* only OUT_CALL_REQUEST, IN_CALL_REPLY, CALL_CLEAR_REQUEST pass - * down to here */ - DEBUGP("altering call id from 0x%04x to 0x%04x\n", - ntohs(REQ_CID(pptpReq, cid_off)), ntohs(new_callid)); - - /* mangle packet */ - if (ip_nat_mangle_tcp_packet(pskb, ct, ctinfo, - cid_off + sizeof(struct pptp_pkt_hdr) + - sizeof(struct PptpControlHeader), - sizeof(new_callid), (char *)&new_callid, - sizeof(new_callid)) == 0) - return NF_DROP; - - return NF_ACCEPT; -} - -static void -pptp_exp_gre(struct ip_conntrack_expect *expect_orig, - struct ip_conntrack_expect *expect_reply) -{ - struct ip_conntrack *ct = expect_orig->master; - struct ip_ct_pptp_master *ct_pptp_info = &ct->help.ct_pptp_info; - struct ip_nat_pptp *nat_pptp_info = &ct->nat.help.nat_pptp_info; - - /* save original PAC call ID in nat_info */ - nat_pptp_info->pac_call_id = ct_pptp_info->pac_call_id; - - /* alter expectation for PNS->PAC direction */ - expect_orig->saved_proto.gre.key = ct_pptp_info->pns_call_id; - expect_orig->tuple.src.u.gre.key = nat_pptp_info->pns_call_id; - expect_orig->tuple.dst.u.gre.key = ct_pptp_info->pac_call_id; - expect_orig->dir = IP_CT_DIR_ORIGINAL; - - /* alter expectation for PAC->PNS direction */ - expect_reply->saved_proto.gre.key = nat_pptp_info->pns_call_id; - expect_reply->tuple.src.u.gre.key = nat_pptp_info->pac_call_id; - expect_reply->tuple.dst.u.gre.key = ct_pptp_info->pns_call_id; - expect_reply->dir = IP_CT_DIR_REPLY; -} - -/* inbound packets == from PAC to PNS */ -static int -pptp_inbound_pkt(struct sk_buff **pskb, - struct ip_conntrack *ct, - enum ip_conntrack_info ctinfo, - struct PptpControlHeader *ctlh, - union pptp_ctrl_union *pptpReq) -{ - struct ip_nat_pptp *nat_pptp_info = &ct->nat.help.nat_pptp_info; - u_int16_t msg; - __be16 new_pcid; - unsigned int pcid_off; - - new_pcid = nat_pptp_info->pns_call_id; - - switch (msg = ntohs(ctlh->messageType)) { - case PPTP_OUT_CALL_REPLY: - pcid_off = offsetof(union pptp_ctrl_union, ocack.peersCallID); - break; - case PPTP_IN_CALL_CONNECT: - pcid_off = offsetof(union pptp_ctrl_union, iccon.peersCallID); - break; - case PPTP_IN_CALL_REQUEST: - /* only need to nat in case PAC is behind NAT box */ - return NF_ACCEPT; - case PPTP_WAN_ERROR_NOTIFY: - pcid_off = offsetof(union pptp_ctrl_union, wanerr.peersCallID); - break; - case PPTP_CALL_DISCONNECT_NOTIFY: - pcid_off = offsetof(union pptp_ctrl_union, disc.callID); - break; - case PPTP_SET_LINK_INFO: - pcid_off = offsetof(union pptp_ctrl_union, setlink.peersCallID); - break; - - default: - DEBUGP("unknown inbound packet %s\n", (msg <= PPTP_MSG_MAX)? 
- pptp_msg_name[msg]:pptp_msg_name[0]); - /* fall through */ - - case PPTP_START_SESSION_REQUEST: - case PPTP_START_SESSION_REPLY: - case PPTP_STOP_SESSION_REQUEST: - case PPTP_STOP_SESSION_REPLY: - case PPTP_ECHO_REQUEST: - case PPTP_ECHO_REPLY: - /* no need to alter packet */ - return NF_ACCEPT; - } - - /* only OUT_CALL_REPLY, IN_CALL_CONNECT, IN_CALL_REQUEST, - * WAN_ERROR_NOTIFY, CALL_DISCONNECT_NOTIFY pass down here */ - - /* mangle packet */ - DEBUGP("altering peer call id from 0x%04x to 0x%04x\n", - ntohs(REQ_CID(pptpReq, pcid_off)), ntohs(new_pcid)); - - if (ip_nat_mangle_tcp_packet(pskb, ct, ctinfo, - pcid_off + sizeof(struct pptp_pkt_hdr) + - sizeof(struct PptpControlHeader), - sizeof(new_pcid), (char *)&new_pcid, - sizeof(new_pcid)) == 0) - return NF_DROP; - return NF_ACCEPT; -} - - -extern int __init ip_nat_proto_gre_init(void); -extern void __exit ip_nat_proto_gre_fini(void); - -static int __init ip_nat_helper_pptp_init(void) -{ - int ret; - - DEBUGP("%s: registering NAT helper\n", __FILE__); - - ret = ip_nat_proto_gre_init(); - if (ret < 0) - return ret; - - BUG_ON(rcu_dereference(ip_nat_pptp_hook_outbound)); - rcu_assign_pointer(ip_nat_pptp_hook_outbound, pptp_outbound_pkt); - - BUG_ON(rcu_dereference(ip_nat_pptp_hook_inbound)); - rcu_assign_pointer(ip_nat_pptp_hook_inbound, pptp_inbound_pkt); - - BUG_ON(rcu_dereference(ip_nat_pptp_hook_exp_gre)); - rcu_assign_pointer(ip_nat_pptp_hook_exp_gre, pptp_exp_gre); - - BUG_ON(rcu_dereference(ip_nat_pptp_hook_expectfn)); - rcu_assign_pointer(ip_nat_pptp_hook_expectfn, pptp_nat_expected); - - printk("ip_nat_pptp version %s loaded\n", IP_NAT_PPTP_VERSION); - return 0; -} - -static void __exit ip_nat_helper_pptp_fini(void) -{ - DEBUGP("cleanup_module\n" ); - - rcu_assign_pointer(ip_nat_pptp_hook_expectfn, NULL); - rcu_assign_pointer(ip_nat_pptp_hook_exp_gre, NULL); - rcu_assign_pointer(ip_nat_pptp_hook_inbound, NULL); - rcu_assign_pointer(ip_nat_pptp_hook_outbound, NULL); - synchronize_rcu(); - - ip_nat_proto_gre_fini(); - - printk("ip_nat_pptp version %s unloaded\n", IP_NAT_PPTP_VERSION); -} - -module_init(ip_nat_helper_pptp_init); -module_exit(ip_nat_helper_pptp_fini); diff -puN net/ipv4/netfilter/ip_nat_irc.c~git-net /dev/null --- a/net/ipv4/netfilter/ip_nat_irc.c +++ /dev/null @@ -1,122 +0,0 @@ -/* IRC extension for TCP NAT alteration. - * (C) 2000-2001 by Harald Welte - * (C) 2004 Rusty Russell IBM Corporation - * based on a copy of RR's ip_nat_ftp.c - * - * ip_nat_irc.c,v 1.16 2001/12/06 07:42:10 laforge Exp - * - * This program is free software; you can redistribute it and/or - * modify it under the terms of the GNU General Public License - * as published by the Free Software Foundation; either version - * 2 of the License, or (at your option) any later version. - */ - -#include -#include -#include -#include -#include -#include -#include -#include -#include -#include -#include -#include - -#if 0 -#define DEBUGP printk -#else -#define DEBUGP(format, args...) -#endif - -MODULE_AUTHOR("Harald Welte "); -MODULE_DESCRIPTION("IRC (DCC) NAT helper"); -MODULE_LICENSE("GPL"); - -static unsigned int help(struct sk_buff **pskb, - enum ip_conntrack_info ctinfo, - unsigned int matchoff, - unsigned int matchlen, - struct ip_conntrack_expect *exp) -{ - u_int16_t port; - unsigned int ret; - - /* "4294967296 65635 " */ - char buffer[18]; - - DEBUGP("IRC_NAT: info (seq %u + %u) in %u\n", - expect->seq, exp_irc_info->len, - ntohl(tcph->seq)); - - /* Reply comes from server. 
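The init/fini paths above publish and retract the helper by assigning function-pointer hooks with rcu_assign_pointer() and reading them with rcu_dereference(). A rough userspace analogue of that publish/lookup shape, using plain C11 release/acquire atomics rather than RCU, with the hook and handler names invented for the example:

/* Rough analogue of an optional helper hook as an atomic function pointer. */
#include <stdatomic.h>
#include <stdio.h>

typedef int (*nat_hook_t)(int pkt);

static _Atomic(nat_hook_t) nat_hook;   /* NULL while no helper is loaded */

static int pptp_outbound(int pkt)
{
    return pkt + 1;                    /* pretend we mangled the packet */
}

static void helper_register(void)
{
    atomic_store_explicit(&nat_hook, pptp_outbound, memory_order_release);
}

static void helper_unregister(void)
{
    atomic_store_explicit(&nat_hook, NULL, memory_order_release);
}

static int process_packet(int pkt)
{
    nat_hook_t hook = atomic_load_explicit(&nat_hook, memory_order_acquire);

    return hook ? hook(pkt) : pkt;     /* fall through if no helper */
}

int main(void)
{
    printf("without helper: %d\n", process_packet(41));
    helper_register();
    printf("with helper:    %d\n", process_packet(41));
    helper_unregister();
    return 0;
}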
*/ - exp->saved_proto.tcp.port = exp->tuple.dst.u.tcp.port; - exp->dir = IP_CT_DIR_REPLY; - - /* When you see the packet, we need to NAT it the same as the - * this one. */ - exp->expectfn = ip_nat_follow_master; - - /* Try to get same port: if not, try to change it. */ - for (port = ntohs(exp->saved_proto.tcp.port); port != 0; port++) { - exp->tuple.dst.u.tcp.port = htons(port); - if (ip_conntrack_expect_related(exp) == 0) - break; - } - - if (port == 0) - return NF_DROP; - - /* strlen("\1DCC CHAT chat AAAAAAAA P\1\n")=27 - * strlen("\1DCC SCHAT chat AAAAAAAA P\1\n")=28 - * strlen("\1DCC SEND F AAAAAAAA P S\1\n")=26 - * strlen("\1DCC MOVE F AAAAAAAA P S\1\n")=26 - * strlen("\1DCC TSEND F AAAAAAAA P S\1\n")=27 - * AAAAAAAAA: bound addr (1.0.0.0==16777216, min 8 digits, - * 255.255.255.255==4294967296, 10 digits) - * P: bound port (min 1 d, max 5d (65635)) - * F: filename (min 1 d ) - * S: size (min 1 d ) - * 0x01, \n: terminators - */ - - /* AAA = "us", ie. where server normally talks to. */ - sprintf(buffer, "%u %u", - ntohl(exp->master->tuplehash[IP_CT_DIR_REPLY].tuple.dst.ip), - port); - DEBUGP("ip_nat_irc: Inserting '%s' == %u.%u.%u.%u, port %u\n", - buffer, NIPQUAD(exp->tuple.src.ip), port); - - ret = ip_nat_mangle_tcp_packet(pskb, exp->master, ctinfo, - matchoff, matchlen, buffer, - strlen(buffer)); - if (ret != NF_ACCEPT) - ip_conntrack_unexpect_related(exp); - return ret; -} - -static void __exit ip_nat_irc_fini(void) -{ - rcu_assign_pointer(ip_nat_irc_hook, NULL); - synchronize_rcu(); -} - -static int __init ip_nat_irc_init(void) -{ - BUG_ON(rcu_dereference(ip_nat_irc_hook)); - rcu_assign_pointer(ip_nat_irc_hook, help); - return 0; -} - -/* Prior to 2.6.11, we had a ports param. No longer, but don't break users. */ -static int warn_set(const char *val, struct kernel_param *kp) -{ - printk(KERN_INFO KBUILD_MODNAME - ": kernel >= 2.6.10 only uses 'ports' for conntrack modules\n"); - return 0; -} -module_param_call(ports, warn_set, NULL, NULL, 0); - -module_init(ip_nat_irc_init); -module_exit(ip_nat_irc_fini); diff -puN net/ipv4/netfilter/ip_nat_proto_gre.c~git-net /dev/null --- a/net/ipv4/netfilter/ip_nat_proto_gre.c +++ /dev/null @@ -1,174 +0,0 @@ -/* - * ip_nat_proto_gre.c - Version 2.0 - * - * NAT protocol helper module for GRE. - * - * GRE is a generic encapsulation protocol, which is generally not very - * suited for NAT, as it has no protocol-specific part as port numbers. - * - * It has an optional key field, which may help us distinguishing two - * connections between the same two hosts. - * - * GRE is defined in RFC 1701 and RFC 1702, as well as RFC 2784 - * - * PPTP is built on top of a modified version of GRE, and has a mandatory - * field called "CallID", which serves us for the same purpose as the key - * field in plain GRE. - * - * Documentation about PPTP can be found in RFC 2637 - * - * (C) 2000-2005 by Harald Welte - * - * Development of this code funded by Astaro AG (http://www.astaro.com/) - * - */ - -#include -#include -#include -#include -#include -#include - -MODULE_LICENSE("GPL"); -MODULE_AUTHOR("Harald Welte "); -MODULE_DESCRIPTION("Netfilter NAT protocol helper module for GRE"); - -#if 0 -#define DEBUGP(format, args...) printk(KERN_DEBUG "%s:%s: " format, __FILE__, \ - __FUNCTION__, ## args) -#else -#define DEBUGP(x, args...) 
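The buffer being built above carries the DCC address in IRC's own encoding: the IPv4 address as a plain decimal number in host byte order, a space, then the port, so the field can be up to strlen("4294967295 65535") characters. A small sketch of producing that field, with made-up values:

/* Sketch of the DCC "address port" field the IRC helper rewrites. */
#include <arpa/inet.h>
#include <stdint.h>
#include <stdio.h>

int main(void)
{
    struct in_addr a;
    char buffer[sizeof("4294967295 65535")];
    uint16_t port = 6667;

    inet_pton(AF_INET, "192.168.0.1", &a);
    snprintf(buffer, sizeof(buffer), "%u %u",
             (unsigned)ntohl(a.s_addr), (unsigned)port);
    printf("DCC address/port field: \"%s\"\n", buffer);
    return 0;
}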
-#endif - -/* is key in given range between min and max */ -static int -gre_in_range(const struct ip_conntrack_tuple *tuple, - enum ip_nat_manip_type maniptype, - const union ip_conntrack_manip_proto *min, - const union ip_conntrack_manip_proto *max) -{ - __be16 key; - - if (maniptype == IP_NAT_MANIP_SRC) - key = tuple->src.u.gre.key; - else - key = tuple->dst.u.gre.key; - - return ntohs(key) >= ntohs(min->gre.key) - && ntohs(key) <= ntohs(max->gre.key); -} - -/* generate unique tuple ... */ -static int -gre_unique_tuple(struct ip_conntrack_tuple *tuple, - const struct ip_nat_range *range, - enum ip_nat_manip_type maniptype, - const struct ip_conntrack *conntrack) -{ - static u_int16_t key; - __be16 *keyptr; - unsigned int min, i, range_size; - - if (maniptype == IP_NAT_MANIP_SRC) - keyptr = &tuple->src.u.gre.key; - else - keyptr = &tuple->dst.u.gre.key; - - if (!(range->flags & IP_NAT_RANGE_PROTO_SPECIFIED)) { - DEBUGP("%p: NATing GRE PPTP\n", conntrack); - min = 1; - range_size = 0xffff; - } else { - min = ntohs(range->min.gre.key); - range_size = ntohs(range->max.gre.key) - min + 1; - } - - DEBUGP("min = %u, range_size = %u\n", min, range_size); - - for (i = 0; i < range_size; i++, key++) { - *keyptr = htons(min + key % range_size); - if (!ip_nat_used_tuple(tuple, conntrack)) - return 1; - } - - DEBUGP("%p: no NAT mapping\n", conntrack); - - return 0; -} - -/* manipulate a GRE packet according to maniptype */ -static int -gre_manip_pkt(struct sk_buff **pskb, - unsigned int iphdroff, - const struct ip_conntrack_tuple *tuple, - enum ip_nat_manip_type maniptype) -{ - struct gre_hdr *greh; - struct gre_hdr_pptp *pgreh; - struct iphdr *iph = (struct iphdr *)((*pskb)->data + iphdroff); - unsigned int hdroff = iphdroff + iph->ihl*4; - - /* pgreh includes two optional 32bit fields which are not required - * to be there. That's where the magic '8' comes from */ - if (!skb_make_writable(pskb, hdroff + sizeof(*pgreh)-8)) - return 0; - - greh = (void *)(*pskb)->data + hdroff; - pgreh = (struct gre_hdr_pptp *) greh; - - /* we only have destination manip of a packet, since 'source key' - * is not present in the packet itself */ - if (maniptype == IP_NAT_MANIP_DST) { - /* key manipulation is always dest */ - switch (greh->version) { - case 0: - if (!greh->key) { - DEBUGP("can't nat GRE w/o key\n"); - break; - } - if (greh->csum) { - /* FIXME: Never tested this code... 
*/ - nf_proto_csum_replace4(gre_csum(greh), *pskb, - *(gre_key(greh)), - tuple->dst.u.gre.key, 0); - } - *(gre_key(greh)) = tuple->dst.u.gre.key; - break; - case GRE_VERSION_PPTP: - DEBUGP("call_id -> 0x%04x\n", - ntohs(tuple->dst.u.gre.key)); - pgreh->call_id = tuple->dst.u.gre.key; - break; - default: - DEBUGP("can't nat unknown GRE version\n"); - return 0; - break; - } - } - return 1; -} - -/* nat helper struct */ -static struct ip_nat_protocol gre = { - .name = "GRE", - .protonum = IPPROTO_GRE, - .manip_pkt = gre_manip_pkt, - .in_range = gre_in_range, - .unique_tuple = gre_unique_tuple, -#if defined(CONFIG_IP_NF_CONNTRACK_NETLINK) || \ - defined(CONFIG_IP_NF_CONNTRACK_NETLINK_MODULE) - .range_to_nfattr = ip_nat_port_range_to_nfattr, - .nfattr_to_range = ip_nat_port_nfattr_to_range, -#endif -}; - -int __init ip_nat_proto_gre_init(void) -{ - return ip_nat_protocol_register(&gre); -} - -void __exit ip_nat_proto_gre_fini(void) -{ - ip_nat_protocol_unregister(&gre); -} diff -puN net/ipv4/netfilter/ip_nat_proto_icmp.c~git-net /dev/null --- a/net/ipv4/netfilter/ip_nat_proto_icmp.c +++ /dev/null @@ -1,87 +0,0 @@ -/* (C) 1999-2001 Paul `Rusty' Russell - * (C) 2002-2004 Netfilter Core Team - * - * This program is free software; you can redistribute it and/or modify - * it under the terms of the GNU General Public License version 2 as - * published by the Free Software Foundation. - */ - -#include -#include -#include -#include -#include -#include - -#include -#include -#include -#include - -static int -icmp_in_range(const struct ip_conntrack_tuple *tuple, - enum ip_nat_manip_type maniptype, - const union ip_conntrack_manip_proto *min, - const union ip_conntrack_manip_proto *max) -{ - return ntohs(tuple->src.u.icmp.id) >= ntohs(min->icmp.id) && - ntohs(tuple->src.u.icmp.id) <= ntohs(max->icmp.id); -} - -static int -icmp_unique_tuple(struct ip_conntrack_tuple *tuple, - const struct ip_nat_range *range, - enum ip_nat_manip_type maniptype, - const struct ip_conntrack *conntrack) -{ - static u_int16_t id; - unsigned int range_size; - unsigned int i; - - range_size = ntohs(range->max.icmp.id) - ntohs(range->min.icmp.id) + 1; - /* If no range specified... 
*/ - if (!(range->flags & IP_NAT_RANGE_PROTO_SPECIFIED)) - range_size = 0xFFFF; - - for (i = 0; i < range_size; i++, id++) { - tuple->src.u.icmp.id = htons(ntohs(range->min.icmp.id) + - (id % range_size)); - if (!ip_nat_used_tuple(tuple, conntrack)) - return 1; - } - return 0; -} - -static int -icmp_manip_pkt(struct sk_buff **pskb, - unsigned int iphdroff, - const struct ip_conntrack_tuple *tuple, - enum ip_nat_manip_type maniptype) -{ - struct iphdr *iph = (struct iphdr *)((*pskb)->data + iphdroff); - struct icmphdr *hdr; - unsigned int hdroff = iphdroff + iph->ihl*4; - - if (!skb_make_writable(pskb, hdroff + sizeof(*hdr))) - return 0; - - hdr = (struct icmphdr *)((*pskb)->data + hdroff); - nf_proto_csum_replace2(&hdr->checksum, *pskb, - hdr->un.echo.id, tuple->src.u.icmp.id, 0); - hdr->un.echo.id = tuple->src.u.icmp.id; - return 1; -} - -struct ip_nat_protocol ip_nat_protocol_icmp = { - .name = "ICMP", - .protonum = IPPROTO_ICMP, - .me = THIS_MODULE, - .manip_pkt = icmp_manip_pkt, - .in_range = icmp_in_range, - .unique_tuple = icmp_unique_tuple, -#if defined(CONFIG_IP_NF_CONNTRACK_NETLINK) || \ - defined(CONFIG_IP_NF_CONNTRACK_NETLINK_MODULE) - .range_to_nfattr = ip_nat_port_range_to_nfattr, - .nfattr_to_range = ip_nat_port_nfattr_to_range, -#endif -}; diff -puN net/ipv4/netfilter/ip_nat_proto_tcp.c~git-net /dev/null --- a/net/ipv4/netfilter/ip_nat_proto_tcp.c +++ /dev/null @@ -1,154 +0,0 @@ -/* (C) 1999-2001 Paul `Rusty' Russell - * (C) 2002-2004 Netfilter Core Team - * - * This program is free software; you can redistribute it and/or modify - * it under the terms of the GNU General Public License version 2 as - * published by the Free Software Foundation. - */ - -#include -#include -#include -#include -#include -#include -#include -#include -#include -#include -#include -#include - -static int -tcp_in_range(const struct ip_conntrack_tuple *tuple, - enum ip_nat_manip_type maniptype, - const union ip_conntrack_manip_proto *min, - const union ip_conntrack_manip_proto *max) -{ - __be16 port; - - if (maniptype == IP_NAT_MANIP_SRC) - port = tuple->src.u.tcp.port; - else - port = tuple->dst.u.tcp.port; - - return ntohs(port) >= ntohs(min->tcp.port) - && ntohs(port) <= ntohs(max->tcp.port); -} - -static int -tcp_unique_tuple(struct ip_conntrack_tuple *tuple, - const struct ip_nat_range *range, - enum ip_nat_manip_type maniptype, - const struct ip_conntrack *conntrack) -{ - static u_int16_t port; - __be16 *portptr; - unsigned int range_size, min, i; - - if (maniptype == IP_NAT_MANIP_SRC) - portptr = &tuple->src.u.tcp.port; - else - portptr = &tuple->dst.u.tcp.port; - - /* If no range specified... */ - if (!(range->flags & IP_NAT_RANGE_PROTO_SPECIFIED)) { - /* If it's dst rewrite, can't change port */ - if (maniptype == IP_NAT_MANIP_DST) - return 0; - - /* Map privileged onto privileged. 
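When no range is configured, the source-port selection above keeps privileged ports privileged and keeps ports below 512 below 512. A standalone sketch of just that range choice, with pick_range() invented for the example:

/* Sketch of the default port-range choice for source NAT. */
#include <stdint.h>
#include <stdio.h>

static void pick_range(uint16_t orig, uint16_t *min, unsigned *size)
{
    if (orig < 512) {
        *min = 1;    *size = 511;
    } else if (orig < 1024) {
        *min = 600;  *size = 1023 - 600 + 1;
    } else {
        *min = 1024; *size = 65535 - 1024 + 1;
    }
}

int main(void)
{
    uint16_t examples[] = { 111, 631, 40000 };
    uint16_t min;
    unsigned size;

    for (int i = 0; i < 3; i++) {
        pick_range(examples[i], &min, &size);
        printf("port %5u -> candidates [%u, %u]\n",
               examples[i], min, min + size - 1);
    }
    return 0;
}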
*/ - if (ntohs(*portptr) < 1024) { - /* Loose convention: >> 512 is credential passing */ - if (ntohs(*portptr)<512) { - min = 1; - range_size = 511 - min + 1; - } else { - min = 600; - range_size = 1023 - min + 1; - } - } else { - min = 1024; - range_size = 65535 - 1024 + 1; - } - } else { - min = ntohs(range->min.tcp.port); - range_size = ntohs(range->max.tcp.port) - min + 1; - } - - /* Start from random port to avoid prediction */ - if (range->flags & IP_NAT_RANGE_PROTO_RANDOM) - port = net_random(); - - for (i = 0; i < range_size; i++, port++) { - *portptr = htons(min + port % range_size); - if (!ip_nat_used_tuple(tuple, conntrack)) { - return 1; - } - } - return 0; -} - -static int -tcp_manip_pkt(struct sk_buff **pskb, - unsigned int iphdroff, - const struct ip_conntrack_tuple *tuple, - enum ip_nat_manip_type maniptype) -{ - struct iphdr *iph = (struct iphdr *)((*pskb)->data + iphdroff); - struct tcphdr *hdr; - unsigned int hdroff = iphdroff + iph->ihl*4; - __be32 oldip, newip; - __be16 *portptr, newport, oldport; - int hdrsize = 8; /* TCP connection tracking guarantees this much */ - - /* this could be a inner header returned in icmp packet; in such - cases we cannot update the checksum field since it is outside of - the 8 bytes of transport layer headers we are guaranteed */ - if ((*pskb)->len >= hdroff + sizeof(struct tcphdr)) - hdrsize = sizeof(struct tcphdr); - - if (!skb_make_writable(pskb, hdroff + hdrsize)) - return 0; - - iph = (struct iphdr *)((*pskb)->data + iphdroff); - hdr = (struct tcphdr *)((*pskb)->data + hdroff); - - if (maniptype == IP_NAT_MANIP_SRC) { - /* Get rid of src ip and src pt */ - oldip = iph->saddr; - newip = tuple->src.ip; - newport = tuple->src.u.tcp.port; - portptr = &hdr->source; - } else { - /* Get rid of dst ip and dst pt */ - oldip = iph->daddr; - newip = tuple->dst.ip; - newport = tuple->dst.u.tcp.port; - portptr = &hdr->dest; - } - - oldport = *portptr; - *portptr = newport; - - if (hdrsize < sizeof(*hdr)) - return 1; - - nf_proto_csum_replace4(&hdr->check, *pskb, oldip, newip, 1); - nf_proto_csum_replace2(&hdr->check, *pskb, oldport, newport, 0); - return 1; -} - -struct ip_nat_protocol ip_nat_protocol_tcp = { - .name = "TCP", - .protonum = IPPROTO_TCP, - .me = THIS_MODULE, - .manip_pkt = tcp_manip_pkt, - .in_range = tcp_in_range, - .unique_tuple = tcp_unique_tuple, -#if defined(CONFIG_IP_NF_CONNTRACK_NETLINK) || \ - defined(CONFIG_IP_NF_CONNTRACK_NETLINK_MODULE) - .range_to_nfattr = ip_nat_port_range_to_nfattr, - .nfattr_to_range = ip_nat_port_nfattr_to_range, -#endif -}; diff -puN net/ipv4/netfilter/ip_nat_proto_udp.c~git-net /dev/null --- a/net/ipv4/netfilter/ip_nat_proto_udp.c +++ /dev/null @@ -1,144 +0,0 @@ -/* (C) 1999-2001 Paul `Rusty' Russell - * (C) 2002-2004 Netfilter Core Team - * - * This program is free software; you can redistribute it and/or modify - * it under the terms of the GNU General Public License version 2 as - * published by the Free Software Foundation. 
- */ - -#include -#include -#include -#include -#include -#include -#include - -#include -#include -#include -#include - -static int -udp_in_range(const struct ip_conntrack_tuple *tuple, - enum ip_nat_manip_type maniptype, - const union ip_conntrack_manip_proto *min, - const union ip_conntrack_manip_proto *max) -{ - __be16 port; - - if (maniptype == IP_NAT_MANIP_SRC) - port = tuple->src.u.udp.port; - else - port = tuple->dst.u.udp.port; - - return ntohs(port) >= ntohs(min->udp.port) - && ntohs(port) <= ntohs(max->udp.port); -} - -static int -udp_unique_tuple(struct ip_conntrack_tuple *tuple, - const struct ip_nat_range *range, - enum ip_nat_manip_type maniptype, - const struct ip_conntrack *conntrack) -{ - static u_int16_t port; - __be16 *portptr; - unsigned int range_size, min, i; - - if (maniptype == IP_NAT_MANIP_SRC) - portptr = &tuple->src.u.udp.port; - else - portptr = &tuple->dst.u.udp.port; - - /* If no range specified... */ - if (!(range->flags & IP_NAT_RANGE_PROTO_SPECIFIED)) { - /* If it's dst rewrite, can't change port */ - if (maniptype == IP_NAT_MANIP_DST) - return 0; - - if (ntohs(*portptr) < 1024) { - /* Loose convention: >> 512 is credential passing */ - if (ntohs(*portptr)<512) { - min = 1; - range_size = 511 - min + 1; - } else { - min = 600; - range_size = 1023 - min + 1; - } - } else { - min = 1024; - range_size = 65535 - 1024 + 1; - } - } else { - min = ntohs(range->min.udp.port); - range_size = ntohs(range->max.udp.port) - min + 1; - } - - /* Start from random port to avoid prediction */ - if (range->flags & IP_NAT_RANGE_PROTO_RANDOM) - port = net_random(); - - for (i = 0; i < range_size; i++, port++) { - *portptr = htons(min + port % range_size); - if (!ip_nat_used_tuple(tuple, conntrack)) - return 1; - } - return 0; -} - -static int -udp_manip_pkt(struct sk_buff **pskb, - unsigned int iphdroff, - const struct ip_conntrack_tuple *tuple, - enum ip_nat_manip_type maniptype) -{ - struct iphdr *iph = (struct iphdr *)((*pskb)->data + iphdroff); - struct udphdr *hdr; - unsigned int hdroff = iphdroff + iph->ihl*4; - __be32 oldip, newip; - __be16 *portptr, newport; - - if (!skb_make_writable(pskb, hdroff + sizeof(*hdr))) - return 0; - - iph = (struct iphdr *)((*pskb)->data + iphdroff); - hdr = (struct udphdr *)((*pskb)->data + hdroff); - - if (maniptype == IP_NAT_MANIP_SRC) { - /* Get rid of src ip and src pt */ - oldip = iph->saddr; - newip = tuple->src.ip; - newport = tuple->src.u.udp.port; - portptr = &hdr->source; - } else { - /* Get rid of dst ip and dst pt */ - oldip = iph->daddr; - newip = tuple->dst.ip; - newport = tuple->dst.u.udp.port; - portptr = &hdr->dest; - } - - if (hdr->check || (*pskb)->ip_summed == CHECKSUM_PARTIAL) { - nf_proto_csum_replace4(&hdr->check, *pskb, oldip, newip, 1); - nf_proto_csum_replace2(&hdr->check, *pskb, *portptr, newport, 0); - if (!hdr->check) - hdr->check = CSUM_MANGLED_0; - } - *portptr = newport; - return 1; -} - -struct ip_nat_protocol ip_nat_protocol_udp = { - .name = "UDP", - .protonum = IPPROTO_UDP, - .me = THIS_MODULE, - .manip_pkt = udp_manip_pkt, - .in_range = udp_in_range, - .unique_tuple = udp_unique_tuple, -#if defined(CONFIG_IP_NF_CONNTRACK_NETLINK) || \ - defined(CONFIG_IP_NF_CONNTRACK_NETLINK_MODULE) - .range_to_nfattr = ip_nat_port_range_to_nfattr, - .nfattr_to_range = ip_nat_port_nfattr_to_range, -#endif -}; diff -puN net/ipv4/netfilter/ip_nat_proto_unknown.c~git-net /dev/null --- a/net/ipv4/netfilter/ip_nat_proto_unknown.c +++ /dev/null @@ -1,55 +0,0 @@ -/* The "unknown" protocol. 
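The UDP manipulation above only touches a non-zero checksum, because in the UDP header zero means "no checksum" (RFC 768); a checksum that genuinely computes to zero is therefore stored as 0xffff, which is what CSUM_MANGLED_0 stands for. A trivial sketch of that final folding step:

/* Sketch: store a computed UDP checksum, avoiding the reserved 0 value. */
#include <stdint.h>
#include <stdio.h>

static uint16_t udp_store_check(uint16_t computed)
{
    return computed ? computed : 0xffff;
}

int main(void)
{
    printf("computed 0x1234 -> stored 0x%04x\n", udp_store_check(0x1234));
    printf("computed 0x0000 -> stored 0x%04x\n", udp_store_check(0x0000));
    return 0;
}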
This is what is used for protocols we - * don't understand. It's returned by ip_ct_find_proto(). - */ - -/* (C) 1999-2001 Paul `Rusty' Russell - * (C) 2002-2004 Netfilter Core Team - * - * This program is free software; you can redistribute it and/or modify - * it under the terms of the GNU General Public License version 2 as - * published by the Free Software Foundation. - */ - -#include -#include -#include -#include - -#include -#include -#include - -static int unknown_in_range(const struct ip_conntrack_tuple *tuple, - enum ip_nat_manip_type manip_type, - const union ip_conntrack_manip_proto *min, - const union ip_conntrack_manip_proto *max) -{ - return 1; -} - -static int unknown_unique_tuple(struct ip_conntrack_tuple *tuple, - const struct ip_nat_range *range, - enum ip_nat_manip_type maniptype, - const struct ip_conntrack *conntrack) -{ - /* Sorry: we can't help you; if it's not unique, we can't frob - anything. */ - return 0; -} - -static int -unknown_manip_pkt(struct sk_buff **pskb, - unsigned int iphdroff, - const struct ip_conntrack_tuple *tuple, - enum ip_nat_manip_type maniptype) -{ - return 1; -} - -struct ip_nat_protocol ip_nat_unknown_protocol = { - .name = "unknown", - /* .me isn't set: getting a ref to this cannot fail. */ - .manip_pkt = unknown_manip_pkt, - .in_range = unknown_in_range, - .unique_tuple = unknown_unique_tuple, -}; diff -puN net/ipv4/netfilter/ip_nat_rule.c~git-net /dev/null --- a/net/ipv4/netfilter/ip_nat_rule.c +++ /dev/null @@ -1,314 +0,0 @@ -/* (C) 1999-2001 Paul `Rusty' Russell - * (C) 2002-2004 Netfilter Core Team - * - * This program is free software; you can redistribute it and/or modify - * it under the terms of the GNU General Public License version 2 as - * published by the Free Software Foundation. - */ - -/* Everything about the rules for NAT. */ -#include -#include -#include -#include -#include -#include -#include -#include -#include -#include -#include - -#include -#include -#include -#include - -#if 0 -#define DEBUGP printk -#else -#define DEBUGP(format, args...) -#endif - -#define NAT_VALID_HOOKS ((1<range[0], hooknum); -} - -/* Before 2.6.11 we did implicit source NAT if required. Warn about change. */ -static void warn_if_extra_mangle(__be32 dstip, __be32 srcip) -{ - static int warned = 0; - struct flowi fl = { .nl_u = { .ip4_u = { .daddr = dstip } } }; - struct rtable *rt; - - if (ip_route_output_key(&rt, &fl) != 0) - return; - - if (rt->rt_src != srcip && !warned) { - printk("NAT: no longer support implicit source local NAT\n"); - printk("NAT: packet src %u.%u.%u.%u -> dst %u.%u.%u.%u\n", - NIPQUAD(srcip), NIPQUAD(dstip)); - warned = 1; - } - ip_rt_put(rt); -} - -static unsigned int ipt_dnat_target(struct sk_buff **pskb, - const struct net_device *in, - const struct net_device *out, - unsigned int hooknum, - const struct xt_target *target, - const void *targinfo) -{ - struct ip_conntrack *ct; - enum ip_conntrack_info ctinfo; - const struct ip_nat_multi_range_compat *mr = targinfo; - - IP_NF_ASSERT(hooknum == NF_IP_PRE_ROUTING - || hooknum == NF_IP_LOCAL_OUT); - - ct = ip_conntrack_get(*pskb, &ctinfo); - - /* Connection must be valid and new. 
*/ - IP_NF_ASSERT(ct && (ctinfo == IP_CT_NEW || ctinfo == IP_CT_RELATED)); - - if (hooknum == NF_IP_LOCAL_OUT - && mr->range[0].flags & IP_NAT_RANGE_MAP_IPS) - warn_if_extra_mangle((*pskb)->nh.iph->daddr, - mr->range[0].min_ip); - - return ip_nat_setup_info(ct, &mr->range[0], hooknum); -} - -static int ipt_snat_checkentry(const char *tablename, - const void *entry, - const struct xt_target *target, - void *targinfo, - unsigned int hook_mask) -{ - struct ip_nat_multi_range_compat *mr = targinfo; - - /* Must be a valid range */ - if (mr->rangesize != 1) { - printk("SNAT: multiple ranges no longer supported\n"); - return 0; - } - return 1; -} - -static int ipt_dnat_checkentry(const char *tablename, - const void *entry, - const struct xt_target *target, - void *targinfo, - unsigned int hook_mask) -{ - struct ip_nat_multi_range_compat *mr = targinfo; - - /* Must be a valid range */ - if (mr->rangesize != 1) { - printk("DNAT: multiple ranges no longer supported\n"); - return 0; - } - if (mr->range[0].flags & IP_NAT_RANGE_PROTO_RANDOM) { - printk("DNAT: port randomization not supported\n"); - return 0; - } - return 1; -} - -inline unsigned int -alloc_null_binding(struct ip_conntrack *conntrack, - struct ip_nat_info *info, - unsigned int hooknum) -{ - /* Force range to this IP; let proto decide mapping for - per-proto parts (hence not IP_NAT_RANGE_PROTO_SPECIFIED). - Use reply in case it's already been mangled (eg local packet). - */ - __be32 ip - = (HOOK2MANIP(hooknum) == IP_NAT_MANIP_SRC - ? conntrack->tuplehash[IP_CT_DIR_REPLY].tuple.dst.ip - : conntrack->tuplehash[IP_CT_DIR_REPLY].tuple.src.ip); - struct ip_nat_range range - = { IP_NAT_RANGE_MAP_IPS, ip, ip, { 0 }, { 0 } }; - - DEBUGP("Allocating NULL binding for %p (%u.%u.%u.%u)\n", conntrack, - NIPQUAD(ip)); - return ip_nat_setup_info(conntrack, &range, hooknum); -} - -unsigned int -alloc_null_binding_confirmed(struct ip_conntrack *conntrack, - struct ip_nat_info *info, - unsigned int hooknum) -{ - __be32 ip - = (HOOK2MANIP(hooknum) == IP_NAT_MANIP_SRC - ? conntrack->tuplehash[IP_CT_DIR_REPLY].tuple.dst.ip - : conntrack->tuplehash[IP_CT_DIR_REPLY].tuple.src.ip); - u_int16_t all - = (HOOK2MANIP(hooknum) == IP_NAT_MANIP_SRC - ? 
conntrack->tuplehash[IP_CT_DIR_REPLY].tuple.dst.u.all - : conntrack->tuplehash[IP_CT_DIR_REPLY].tuple.src.u.all); - struct ip_nat_range range - = { IP_NAT_RANGE_MAP_IPS, ip, ip, { all }, { all } }; - - DEBUGP("Allocating NULL binding for confirmed %p (%u.%u.%u.%u)\n", - conntrack, NIPQUAD(ip)); - return ip_nat_setup_info(conntrack, &range, hooknum); -} - -int ip_nat_rule_find(struct sk_buff **pskb, - unsigned int hooknum, - const struct net_device *in, - const struct net_device *out, - struct ip_conntrack *ct, - struct ip_nat_info *info) -{ - int ret; - - ret = ipt_do_table(pskb, hooknum, in, out, &nat_table); - - if (ret == NF_ACCEPT) { - if (!ip_nat_initialized(ct, HOOK2MANIP(hooknum))) - /* NUL mapping */ - ret = alloc_null_binding(ct, info, hooknum); - } - return ret; -} - -static struct xt_target ipt_snat_reg = { - .name = "SNAT", - .family = AF_INET, - .target = ipt_snat_target, - .targetsize = sizeof(struct ip_nat_multi_range_compat), - .table = "nat", - .hooks = 1 << NF_IP_POST_ROUTING, - .checkentry = ipt_snat_checkentry, -}; - -static struct xt_target ipt_dnat_reg = { - .name = "DNAT", - .family = AF_INET, - .target = ipt_dnat_target, - .targetsize = sizeof(struct ip_nat_multi_range_compat), - .table = "nat", - .hooks = (1 << NF_IP_PRE_ROUTING) | (1 << NF_IP_LOCAL_OUT), - .checkentry = ipt_dnat_checkentry, -}; - -int __init ip_nat_rule_init(void) -{ - int ret; - - ret = ipt_register_table(&nat_table, &nat_initial_table.repl); - if (ret != 0) - return ret; - ret = xt_register_target(&ipt_snat_reg); - if (ret != 0) - goto unregister_table; - - ret = xt_register_target(&ipt_dnat_reg); - if (ret != 0) - goto unregister_snat; - - return ret; - - unregister_snat: - xt_unregister_target(&ipt_snat_reg); - unregister_table: - xt_unregister_table(&nat_table); - - return ret; -} - -void ip_nat_rule_cleanup(void) -{ - xt_unregister_target(&ipt_dnat_reg); - xt_unregister_target(&ipt_snat_reg); - ipt_unregister_table(&nat_table); -} diff -puN net/ipv4/netfilter/ip_nat_sip.c~git-net /dev/null --- a/net/ipv4/netfilter/ip_nat_sip.c +++ /dev/null @@ -1,282 +0,0 @@ -/* SIP extension for UDP NAT alteration. - * - * (C) 2005 by Christian Hentschel - * based on RR's ip_nat_ftp.c and other modules. - * - * This program is free software; you can redistribute it and/or modify - * it under the terms of the GNU General Public License version 2 as - * published by the Free Software Foundation. - */ - -#include -#include -#include -#include - -#include -#include -#include -#include -#include - -MODULE_LICENSE("GPL"); -MODULE_AUTHOR("Christian Hentschel "); -MODULE_DESCRIPTION("SIP NAT helper"); - -#if 0 -#define DEBUGP printk -#else -#define DEBUGP(format, args...) 
-#endif - -struct addr_map { - struct { - char src[sizeof("nnn.nnn.nnn.nnn:nnnnn")]; - char dst[sizeof("nnn.nnn.nnn.nnn:nnnnn")]; - unsigned int srclen, srciplen; - unsigned int dstlen, dstiplen; - } addr[IP_CT_DIR_MAX]; -}; - -static void addr_map_init(struct ip_conntrack *ct, struct addr_map *map) -{ - struct ip_conntrack_tuple *t; - enum ip_conntrack_dir dir; - unsigned int n; - - for (dir = 0; dir < IP_CT_DIR_MAX; dir++) { - t = &ct->tuplehash[dir].tuple; - - n = sprintf(map->addr[dir].src, "%u.%u.%u.%u", - NIPQUAD(t->src.ip)); - map->addr[dir].srciplen = n; - n += sprintf(map->addr[dir].src + n, ":%u", - ntohs(t->src.u.udp.port)); - map->addr[dir].srclen = n; - - n = sprintf(map->addr[dir].dst, "%u.%u.%u.%u", - NIPQUAD(t->dst.ip)); - map->addr[dir].dstiplen = n; - n += sprintf(map->addr[dir].dst + n, ":%u", - ntohs(t->dst.u.udp.port)); - map->addr[dir].dstlen = n; - } -} - -static int map_sip_addr(struct sk_buff **pskb, enum ip_conntrack_info ctinfo, - struct ip_conntrack *ct, const char **dptr, size_t dlen, - enum sip_header_pos pos, struct addr_map *map) -{ - enum ip_conntrack_dir dir = CTINFO2DIR(ctinfo); - unsigned int matchlen, matchoff, addrlen; - char *addr; - - if (ct_sip_get_info(*dptr, dlen, &matchoff, &matchlen, pos) <= 0) - return 1; - - if ((matchlen == map->addr[dir].srciplen || - matchlen == map->addr[dir].srclen) && - memcmp(*dptr + matchoff, map->addr[dir].src, matchlen) == 0) { - addr = map->addr[!dir].dst; - addrlen = map->addr[!dir].dstlen; - } else if ((matchlen == map->addr[dir].dstiplen || - matchlen == map->addr[dir].dstlen) && - memcmp(*dptr + matchoff, map->addr[dir].dst, matchlen) == 0) { - addr = map->addr[!dir].src; - addrlen = map->addr[!dir].srclen; - } else - return 1; - - if (!ip_nat_mangle_udp_packet(pskb, ct, ctinfo, - matchoff, matchlen, addr, addrlen)) - return 0; - *dptr = (*pskb)->data + (*pskb)->nh.iph->ihl*4 + sizeof(struct udphdr); - return 1; - -} - -static unsigned int ip_nat_sip(struct sk_buff **pskb, - enum ip_conntrack_info ctinfo, - struct ip_conntrack *ct, - const char **dptr) -{ - enum sip_header_pos pos; - struct addr_map map; - int dataoff, datalen; - - dataoff = (*pskb)->nh.iph->ihl*4 + sizeof(struct udphdr); - datalen = (*pskb)->len - dataoff; - if (datalen < sizeof("SIP/2.0") - 1) - return NF_DROP; - - addr_map_init(ct, &map); - - /* Basic rules: requests and responses. */ - if (strncmp(*dptr, "SIP/2.0", sizeof("SIP/2.0") - 1) != 0) { - /* 10.2: Constructing the REGISTER Request: - * - * The "userinfo" and "@" components of the SIP URI MUST NOT - * be present. 
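addr_map_init() above pre-renders each tuple as both "a.b.c.d" and "a.b.c.d:port" and remembers both lengths, so a SIP header field can later be matched against either form. A small sketch of that bookkeeping for a single endpoint, with the struct and function names made up for the example:

/* Sketch: keep the textual ip and ip:port forms plus their lengths. */
#include <stdint.h>
#include <stdio.h>

struct sip_addr {
    char buf[sizeof("255.255.255.255:65535")];
    unsigned int iplen;     /* length of "a.b.c.d" only   */
    unsigned int len;       /* length including ":port"   */
};

static void sip_addr_init(struct sip_addr *a, const char *ip, uint16_t port)
{
    a->iplen = snprintf(a->buf, sizeof(a->buf), "%s", ip);
    a->len   = a->iplen + snprintf(a->buf + a->iplen,
                                   sizeof(a->buf) - a->iplen,
                                   ":%u", (unsigned)port);
}

int main(void)
{
    struct sip_addr a;

    sip_addr_init(&a, "10.0.0.1", 5060);
    printf("%s (ip part %u chars, full %u chars)\n", a.buf, a.iplen, a.len);
    return 0;
}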
- */ - if (datalen >= sizeof("REGISTER") - 1 && - strncmp(*dptr, "REGISTER", sizeof("REGISTER") - 1) == 0) - pos = POS_REG_REQ_URI; - else - pos = POS_REQ_URI; - - if (!map_sip_addr(pskb, ctinfo, ct, dptr, datalen, pos, &map)) - return NF_DROP; - } - - if (!map_sip_addr(pskb, ctinfo, ct, dptr, datalen, POS_FROM, &map) || - !map_sip_addr(pskb, ctinfo, ct, dptr, datalen, POS_TO, &map) || - !map_sip_addr(pskb, ctinfo, ct, dptr, datalen, POS_VIA, &map) || - !map_sip_addr(pskb, ctinfo, ct, dptr, datalen, POS_CONTACT, &map)) - return NF_DROP; - return NF_ACCEPT; -} - -static unsigned int mangle_sip_packet(struct sk_buff **pskb, - enum ip_conntrack_info ctinfo, - struct ip_conntrack *ct, - const char **dptr, size_t dlen, - char *buffer, int bufflen, - enum sip_header_pos pos) -{ - unsigned int matchlen, matchoff; - - if (ct_sip_get_info(*dptr, dlen, &matchoff, &matchlen, pos) <= 0) - return 0; - - if (!ip_nat_mangle_udp_packet(pskb, ct, ctinfo, - matchoff, matchlen, buffer, bufflen)) - return 0; - - /* We need to reload this. Thanks Patrick. */ - *dptr = (*pskb)->data + (*pskb)->nh.iph->ihl*4 + sizeof(struct udphdr); - return 1; -} - -static int mangle_content_len(struct sk_buff **pskb, - enum ip_conntrack_info ctinfo, - struct ip_conntrack *ct, - const char *dptr) -{ - unsigned int dataoff, matchoff, matchlen; - char buffer[sizeof("65536")]; - int bufflen; - - dataoff = (*pskb)->nh.iph->ihl*4 + sizeof(struct udphdr); - - /* Get actual SDP lenght */ - if (ct_sip_get_info(dptr, (*pskb)->len - dataoff, &matchoff, - &matchlen, POS_SDP_HEADER) > 0) { - - /* since ct_sip_get_info() give us a pointer passing 'v=' - we need to add 2 bytes in this count. */ - int c_len = (*pskb)->len - dataoff - matchoff + 2; - - /* Now, update SDP lenght */ - if (ct_sip_get_info(dptr, (*pskb)->len - dataoff, &matchoff, - &matchlen, POS_CONTENT) > 0) { - - bufflen = sprintf(buffer, "%u", c_len); - - return ip_nat_mangle_udp_packet(pskb, ct, ctinfo, - matchoff, matchlen, - buffer, bufflen); - } - } - return 0; -} - -static unsigned int mangle_sdp(struct sk_buff **pskb, - enum ip_conntrack_info ctinfo, - struct ip_conntrack *ct, - __be32 newip, u_int16_t port, - const char *dptr) -{ - char buffer[sizeof("nnn.nnn.nnn.nnn")]; - unsigned int dataoff, bufflen; - - dataoff = (*pskb)->nh.iph->ihl*4 + sizeof(struct udphdr); - - /* Mangle owner and contact info. */ - bufflen = sprintf(buffer, "%u.%u.%u.%u", NIPQUAD(newip)); - if (!mangle_sip_packet(pskb, ctinfo, ct, &dptr, (*pskb)->len - dataoff, - buffer, bufflen, POS_OWNER)) - return 0; - - if (!mangle_sip_packet(pskb, ctinfo, ct, &dptr, (*pskb)->len - dataoff, - buffer, bufflen, POS_CONNECTION)) - return 0; - - /* Mangle media port. */ - bufflen = sprintf(buffer, "%u", port); - if (!mangle_sip_packet(pskb, ctinfo, ct, &dptr, (*pskb)->len - dataoff, - buffer, bufflen, POS_MEDIA)) - return 0; - - return mangle_content_len(pskb, ctinfo, ct, dptr); -} - -/* So, this packet has hit the connection tracking matching code. - Mangle it, and change the expectation to match the new version. 
*/ -static unsigned int ip_nat_sdp(struct sk_buff **pskb, - enum ip_conntrack_info ctinfo, - struct ip_conntrack_expect *exp, - const char *dptr) -{ - struct ip_conntrack *ct = exp->master; - enum ip_conntrack_dir dir = CTINFO2DIR(ctinfo); - __be32 newip; - u_int16_t port; - - DEBUGP("ip_nat_sdp():\n"); - - /* Connection will come from reply */ - newip = ct->tuplehash[!dir].tuple.dst.ip; - - exp->tuple.dst.ip = newip; - exp->saved_proto.udp.port = exp->tuple.dst.u.udp.port; - exp->dir = !dir; - - /* When you see the packet, we need to NAT it the same as the - this one. */ - exp->expectfn = ip_nat_follow_master; - - /* Try to get same port: if not, try to change it. */ - for (port = ntohs(exp->saved_proto.udp.port); port != 0; port++) { - exp->tuple.dst.u.udp.port = htons(port); - if (ip_conntrack_expect_related(exp) == 0) - break; - } - - if (port == 0) - return NF_DROP; - - if (!mangle_sdp(pskb, ctinfo, ct, newip, port, dptr)) { - ip_conntrack_unexpect_related(exp); - return NF_DROP; - } - return NF_ACCEPT; -} - -static void __exit fini(void) -{ - rcu_assign_pointer(ip_nat_sip_hook, NULL); - rcu_assign_pointer(ip_nat_sdp_hook, NULL); - synchronize_rcu(); -} - -static int __init init(void) -{ - BUG_ON(rcu_dereference(ip_nat_sip_hook)); - BUG_ON(rcu_dereference(ip_nat_sdp_hook)); - rcu_assign_pointer(ip_nat_sip_hook, ip_nat_sip); - rcu_assign_pointer(ip_nat_sdp_hook, ip_nat_sdp); - return 0; -} - -module_init(init); -module_exit(fini); diff -puN net/ipv4/netfilter/ip_nat_snmp_basic.c~git-net /dev/null --- a/net/ipv4/netfilter/ip_nat_snmp_basic.c +++ /dev/null @@ -1,1333 +0,0 @@ -/* - * ip_nat_snmp_basic.c - * - * Basic SNMP Application Layer Gateway - * - * This IP NAT module is intended for use with SNMP network - * discovery and monitoring applications where target networks use - * conflicting private address realms. - * - * Static NAT is used to remap the networks from the view of the network - * management system at the IP layer, and this module remaps some application - * layer addresses to match. - * - * The simplest form of ALG is performed, where only tagged IP addresses - * are modified. The module does not need to be MIB aware and only scans - * messages at the ASN.1/BER level. - * - * Currently, only SNMPv1 and SNMPv2 are supported. - * - * More information on ALG and associated issues can be found in - * RFC 2962 - * - * The ASB.1/BER parsing code is derived from the gxsnmp package by Gregory - * McLean & Jochen Friedrich, stripped down for use in the kernel. - * - * Copyright (c) 2000 RP Internet (www.rpi.net.au). - * - * This program is free software; you can redistribute it and/or modify - * it under the terms of the GNU General Public License as published by - * the Free Software Foundation; either version 2 of the License, or - * (at your option) any later version. - * This program is distributed in the hope that it will be useful, - * but WITHOUT ANY WARRANTY; without even the implied warranty of - * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the - * GNU General Public License for more details. - * You should have received a copy of the GNU General Public License - * along with this program; if not, write to the Free Software - * Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA - * - * Author: James Morris - * - * Updates: - * 2000-08-06: Convert to new helper API (Harald Welte). 
- * - */ -#include -#include -#include -#include -#include -#include -#include -#include -#include -#include -#include -#include -#include -#include - -MODULE_LICENSE("GPL"); -MODULE_AUTHOR("James Morris "); -MODULE_DESCRIPTION("Basic SNMP Application Layer Gateway"); - -#define SNMP_PORT 161 -#define SNMP_TRAP_PORT 162 -#define NOCT1(n) (*(u8 *)n) - -static int debug; -static DEFINE_SPINLOCK(snmp_lock); - -/* - * Application layer address mapping mimics the NAT mapping, but - * only for the first octet in this case (a more flexible system - * can be implemented if needed). - */ -struct oct1_map -{ - u_int8_t from; - u_int8_t to; -}; - - -/***************************************************************************** - * - * Basic ASN.1 decoding routines (gxsnmp author Dirk Wisse) - * - *****************************************************************************/ - -/* Class */ -#define ASN1_UNI 0 /* Universal */ -#define ASN1_APL 1 /* Application */ -#define ASN1_CTX 2 /* Context */ -#define ASN1_PRV 3 /* Private */ - -/* Tag */ -#define ASN1_EOC 0 /* End Of Contents */ -#define ASN1_BOL 1 /* Boolean */ -#define ASN1_INT 2 /* Integer */ -#define ASN1_BTS 3 /* Bit String */ -#define ASN1_OTS 4 /* Octet String */ -#define ASN1_NUL 5 /* Null */ -#define ASN1_OJI 6 /* Object Identifier */ -#define ASN1_OJD 7 /* Object Description */ -#define ASN1_EXT 8 /* External */ -#define ASN1_SEQ 16 /* Sequence */ -#define ASN1_SET 17 /* Set */ -#define ASN1_NUMSTR 18 /* Numerical String */ -#define ASN1_PRNSTR 19 /* Printable String */ -#define ASN1_TEXSTR 20 /* Teletext String */ -#define ASN1_VIDSTR 21 /* Video String */ -#define ASN1_IA5STR 22 /* IA5 String */ -#define ASN1_UNITIM 23 /* Universal Time */ -#define ASN1_GENTIM 24 /* General Time */ -#define ASN1_GRASTR 25 /* Graphical String */ -#define ASN1_VISSTR 26 /* Visible String */ -#define ASN1_GENSTR 27 /* General String */ - -/* Primitive / Constructed methods*/ -#define ASN1_PRI 0 /* Primitive */ -#define ASN1_CON 1 /* Constructed */ - -/* - * Error codes. - */ -#define ASN1_ERR_NOERROR 0 -#define ASN1_ERR_DEC_EMPTY 2 -#define ASN1_ERR_DEC_EOC_MISMATCH 3 -#define ASN1_ERR_DEC_LENGTH_MISMATCH 4 -#define ASN1_ERR_DEC_BADVALUE 5 - -/* - * ASN.1 context. 
- */ -struct asn1_ctx -{ - int error; /* Error condition */ - unsigned char *pointer; /* Octet just to be decoded */ - unsigned char *begin; /* First octet */ - unsigned char *end; /* Octet after last octet */ -}; - -/* - * Octet string (not null terminated) - */ -struct asn1_octstr -{ - unsigned char *data; - unsigned int len; -}; - -static void asn1_open(struct asn1_ctx *ctx, - unsigned char *buf, - unsigned int len) -{ - ctx->begin = buf; - ctx->end = buf + len; - ctx->pointer = buf; - ctx->error = ASN1_ERR_NOERROR; -} - -static unsigned char asn1_octet_decode(struct asn1_ctx *ctx, unsigned char *ch) -{ - if (ctx->pointer >= ctx->end) { - ctx->error = ASN1_ERR_DEC_EMPTY; - return 0; - } - *ch = *(ctx->pointer)++; - return 1; -} - -static unsigned char asn1_tag_decode(struct asn1_ctx *ctx, unsigned int *tag) -{ - unsigned char ch; - - *tag = 0; - - do - { - if (!asn1_octet_decode(ctx, &ch)) - return 0; - *tag <<= 7; - *tag |= ch & 0x7F; - } while ((ch & 0x80) == 0x80); - return 1; -} - -static unsigned char asn1_id_decode(struct asn1_ctx *ctx, - unsigned int *cls, - unsigned int *con, - unsigned int *tag) -{ - unsigned char ch; - - if (!asn1_octet_decode(ctx, &ch)) - return 0; - - *cls = (ch & 0xC0) >> 6; - *con = (ch & 0x20) >> 5; - *tag = (ch & 0x1F); - - if (*tag == 0x1F) { - if (!asn1_tag_decode(ctx, tag)) - return 0; - } - return 1; -} - -static unsigned char asn1_length_decode(struct asn1_ctx *ctx, - unsigned int *def, - unsigned int *len) -{ - unsigned char ch, cnt; - - if (!asn1_octet_decode(ctx, &ch)) - return 0; - - if (ch == 0x80) - *def = 0; - else { - *def = 1; - - if (ch < 0x80) - *len = ch; - else { - cnt = (unsigned char) (ch & 0x7F); - *len = 0; - - while (cnt > 0) { - if (!asn1_octet_decode(ctx, &ch)) - return 0; - *len <<= 8; - *len |= ch; - cnt--; - } - } - } - return 1; -} - -static unsigned char asn1_header_decode(struct asn1_ctx *ctx, - unsigned char **eoc, - unsigned int *cls, - unsigned int *con, - unsigned int *tag) -{ - unsigned int def, len; - - if (!asn1_id_decode(ctx, cls, con, tag)) - return 0; - - def = len = 0; - if (!asn1_length_decode(ctx, &def, &len)) - return 0; - - if (def) - *eoc = ctx->pointer + len; - else - *eoc = NULL; - return 1; -} - -static unsigned char asn1_eoc_decode(struct asn1_ctx *ctx, unsigned char *eoc) -{ - unsigned char ch; - - if (eoc == 0) { - if (!asn1_octet_decode(ctx, &ch)) - return 0; - - if (ch != 0x00) { - ctx->error = ASN1_ERR_DEC_EOC_MISMATCH; - return 0; - } - - if (!asn1_octet_decode(ctx, &ch)) - return 0; - - if (ch != 0x00) { - ctx->error = ASN1_ERR_DEC_EOC_MISMATCH; - return 0; - } - return 1; - } else { - if (ctx->pointer != eoc) { - ctx->error = ASN1_ERR_DEC_LENGTH_MISMATCH; - return 0; - } - return 1; - } -} - -static unsigned char asn1_null_decode(struct asn1_ctx *ctx, unsigned char *eoc) -{ - ctx->pointer = eoc; - return 1; -} - -static unsigned char asn1_long_decode(struct asn1_ctx *ctx, - unsigned char *eoc, - long *integer) -{ - unsigned char ch; - unsigned int len; - - if (!asn1_octet_decode(ctx, &ch)) - return 0; - - *integer = (signed char) ch; - len = 1; - - while (ctx->pointer < eoc) { - if (++len > sizeof (long)) { - ctx->error = ASN1_ERR_DEC_BADVALUE; - return 0; - } - - if (!asn1_octet_decode(ctx, &ch)) - return 0; - - *integer <<= 8; - *integer |= ch; - } - return 1; -} - -static unsigned char asn1_uint_decode(struct asn1_ctx *ctx, - unsigned char *eoc, - unsigned int *integer) -{ - unsigned char ch; - unsigned int len; - - if (!asn1_octet_decode(ctx, &ch)) - return 0; - - *integer = ch; - if (ch == 
0) len = 0; - else len = 1; - - while (ctx->pointer < eoc) { - if (++len > sizeof (unsigned int)) { - ctx->error = ASN1_ERR_DEC_BADVALUE; - return 0; - } - - if (!asn1_octet_decode(ctx, &ch)) - return 0; - - *integer <<= 8; - *integer |= ch; - } - return 1; -} - -static unsigned char asn1_ulong_decode(struct asn1_ctx *ctx, - unsigned char *eoc, - unsigned long *integer) -{ - unsigned char ch; - unsigned int len; - - if (!asn1_octet_decode(ctx, &ch)) - return 0; - - *integer = ch; - if (ch == 0) len = 0; - else len = 1; - - while (ctx->pointer < eoc) { - if (++len > sizeof (unsigned long)) { - ctx->error = ASN1_ERR_DEC_BADVALUE; - return 0; - } - - if (!asn1_octet_decode(ctx, &ch)) - return 0; - - *integer <<= 8; - *integer |= ch; - } - return 1; -} - -static unsigned char asn1_octets_decode(struct asn1_ctx *ctx, - unsigned char *eoc, - unsigned char **octets, - unsigned int *len) -{ - unsigned char *ptr; - - *len = 0; - - *octets = kmalloc(eoc - ctx->pointer, GFP_ATOMIC); - if (*octets == NULL) { - if (net_ratelimit()) - printk("OOM in bsalg (%d)\n", __LINE__); - return 0; - } - - ptr = *octets; - while (ctx->pointer < eoc) { - if (!asn1_octet_decode(ctx, (unsigned char *)ptr++)) { - kfree(*octets); - *octets = NULL; - return 0; - } - (*len)++; - } - return 1; -} - -static unsigned char asn1_subid_decode(struct asn1_ctx *ctx, - unsigned long *subid) -{ - unsigned char ch; - - *subid = 0; - - do { - if (!asn1_octet_decode(ctx, &ch)) - return 0; - - *subid <<= 7; - *subid |= ch & 0x7F; - } while ((ch & 0x80) == 0x80); - return 1; -} - -static unsigned char asn1_oid_decode(struct asn1_ctx *ctx, - unsigned char *eoc, - unsigned long **oid, - unsigned int *len) -{ - unsigned long subid; - unsigned int size; - unsigned long *optr; - - size = eoc - ctx->pointer + 1; - *oid = kmalloc(size * sizeof(unsigned long), GFP_ATOMIC); - if (*oid == NULL) { - if (net_ratelimit()) - printk("OOM in bsalg (%d)\n", __LINE__); - return 0; - } - - optr = *oid; - - if (!asn1_subid_decode(ctx, &subid)) { - kfree(*oid); - *oid = NULL; - return 0; - } - - if (subid < 40) { - optr [0] = 0; - optr [1] = subid; - } else if (subid < 80) { - optr [0] = 1; - optr [1] = subid - 40; - } else { - optr [0] = 2; - optr [1] = subid - 80; - } - - *len = 2; - optr += 2; - - while (ctx->pointer < eoc) { - if (++(*len) > size) { - ctx->error = ASN1_ERR_DEC_BADVALUE; - kfree(*oid); - *oid = NULL; - return 0; - } - - if (!asn1_subid_decode(ctx, optr++)) { - kfree(*oid); - *oid = NULL; - return 0; - } - } - return 1; -} - -/***************************************************************************** - * - * SNMP decoding routines (gxsnmp author Dirk Wisse) - * - *****************************************************************************/ - -/* SNMP Versions */ -#define SNMP_V1 0 -#define SNMP_V2C 1 -#define SNMP_V2 2 -#define SNMP_V3 3 - -/* Default Sizes */ -#define SNMP_SIZE_COMM 256 -#define SNMP_SIZE_OBJECTID 128 -#define SNMP_SIZE_BUFCHR 256 -#define SNMP_SIZE_BUFINT 128 -#define SNMP_SIZE_SMALLOBJECTID 16 - -/* Requests */ -#define SNMP_PDU_GET 0 -#define SNMP_PDU_NEXT 1 -#define SNMP_PDU_RESPONSE 2 -#define SNMP_PDU_SET 3 -#define SNMP_PDU_TRAP1 4 -#define SNMP_PDU_BULK 5 -#define SNMP_PDU_INFORM 6 -#define SNMP_PDU_TRAP2 7 - -/* Errors */ -#define SNMP_NOERROR 0 -#define SNMP_TOOBIG 1 -#define SNMP_NOSUCHNAME 2 -#define SNMP_BADVALUE 3 -#define SNMP_READONLY 4 -#define SNMP_GENERROR 5 -#define SNMP_NOACCESS 6 -#define SNMP_WRONGTYPE 7 -#define SNMP_WRONGLENGTH 8 -#define SNMP_WRONGENCODING 9 -#define SNMP_WRONGVALUE 10 
-#define SNMP_NOCREATION 11 -#define SNMP_INCONSISTENTVALUE 12 -#define SNMP_RESOURCEUNAVAILABLE 13 -#define SNMP_COMMITFAILED 14 -#define SNMP_UNDOFAILED 15 -#define SNMP_AUTHORIZATIONERROR 16 -#define SNMP_NOTWRITABLE 17 -#define SNMP_INCONSISTENTNAME 18 - -/* General SNMP V1 Traps */ -#define SNMP_TRAP_COLDSTART 0 -#define SNMP_TRAP_WARMSTART 1 -#define SNMP_TRAP_LINKDOWN 2 -#define SNMP_TRAP_LINKUP 3 -#define SNMP_TRAP_AUTFAILURE 4 -#define SNMP_TRAP_EQPNEIGHBORLOSS 5 -#define SNMP_TRAP_ENTSPECIFIC 6 - -/* SNMPv1 Types */ -#define SNMP_NULL 0 -#define SNMP_INTEGER 1 /* l */ -#define SNMP_OCTETSTR 2 /* c */ -#define SNMP_DISPLAYSTR 2 /* c */ -#define SNMP_OBJECTID 3 /* ul */ -#define SNMP_IPADDR 4 /* uc */ -#define SNMP_COUNTER 5 /* ul */ -#define SNMP_GAUGE 6 /* ul */ -#define SNMP_TIMETICKS 7 /* ul */ -#define SNMP_OPAQUE 8 /* c */ - -/* Additional SNMPv2 Types */ -#define SNMP_UINTEGER 5 /* ul */ -#define SNMP_BITSTR 9 /* uc */ -#define SNMP_NSAP 10 /* uc */ -#define SNMP_COUNTER64 11 /* ul */ -#define SNMP_NOSUCHOBJECT 12 -#define SNMP_NOSUCHINSTANCE 13 -#define SNMP_ENDOFMIBVIEW 14 - -union snmp_syntax -{ - unsigned char uc[0]; /* 8 bit unsigned */ - char c[0]; /* 8 bit signed */ - unsigned long ul[0]; /* 32 bit unsigned */ - long l[0]; /* 32 bit signed */ -}; - -struct snmp_object -{ - unsigned long *id; - unsigned int id_len; - unsigned short type; - unsigned int syntax_len; - union snmp_syntax syntax; -}; - -struct snmp_request -{ - unsigned long id; - unsigned int error_status; - unsigned int error_index; -}; - -struct snmp_v1_trap -{ - unsigned long *id; - unsigned int id_len; - unsigned long ip_address; /* pointer */ - unsigned int general; - unsigned int specific; - unsigned long time; -}; - -/* SNMP types */ -#define SNMP_IPA 0 -#define SNMP_CNT 1 -#define SNMP_GGE 2 -#define SNMP_TIT 3 -#define SNMP_OPQ 4 -#define SNMP_C64 6 - -/* SNMP errors */ -#define SERR_NSO 0 -#define SERR_NSI 1 -#define SERR_EOM 2 - -static inline void mangle_address(unsigned char *begin, - unsigned char *addr, - const struct oct1_map *map, - __sum16 *check); -struct snmp_cnv -{ - unsigned int class; - unsigned int tag; - int syntax; -}; - -static struct snmp_cnv snmp_conv [] = -{ - {ASN1_UNI, ASN1_NUL, SNMP_NULL}, - {ASN1_UNI, ASN1_INT, SNMP_INTEGER}, - {ASN1_UNI, ASN1_OTS, SNMP_OCTETSTR}, - {ASN1_UNI, ASN1_OTS, SNMP_DISPLAYSTR}, - {ASN1_UNI, ASN1_OJI, SNMP_OBJECTID}, - {ASN1_APL, SNMP_IPA, SNMP_IPADDR}, - {ASN1_APL, SNMP_CNT, SNMP_COUNTER}, /* Counter32 */ - {ASN1_APL, SNMP_GGE, SNMP_GAUGE}, /* Gauge32 == Unsigned32 */ - {ASN1_APL, SNMP_TIT, SNMP_TIMETICKS}, - {ASN1_APL, SNMP_OPQ, SNMP_OPAQUE}, - - /* SNMPv2 data types and errors */ - {ASN1_UNI, ASN1_BTS, SNMP_BITSTR}, - {ASN1_APL, SNMP_C64, SNMP_COUNTER64}, - {ASN1_CTX, SERR_NSO, SNMP_NOSUCHOBJECT}, - {ASN1_CTX, SERR_NSI, SNMP_NOSUCHINSTANCE}, - {ASN1_CTX, SERR_EOM, SNMP_ENDOFMIBVIEW}, - {0, 0, -1} -}; - -static unsigned char snmp_tag_cls2syntax(unsigned int tag, - unsigned int cls, - unsigned short *syntax) -{ - struct snmp_cnv *cnv; - - cnv = snmp_conv; - - while (cnv->syntax != -1) { - if (cnv->tag == tag && cnv->class == cls) { - *syntax = cnv->syntax; - return 1; - } - cnv++; - } - return 0; -} - -static unsigned char snmp_object_decode(struct asn1_ctx *ctx, - struct snmp_object **obj) -{ - unsigned int cls, con, tag, len, idlen; - unsigned short type; - unsigned char *eoc, *end, *p; - unsigned long *lp, *id; - unsigned long ul; - long l; - - *obj = NULL; - id = NULL; - - if (!asn1_header_decode(ctx, &eoc, &cls, &con, &tag)) - return 0; - - 
if (cls != ASN1_UNI || con != ASN1_CON || tag != ASN1_SEQ) - return 0; - - if (!asn1_header_decode(ctx, &end, &cls, &con, &tag)) - return 0; - - if (cls != ASN1_UNI || con != ASN1_PRI || tag != ASN1_OJI) - return 0; - - if (!asn1_oid_decode(ctx, end, &id, &idlen)) - return 0; - - if (!asn1_header_decode(ctx, &end, &cls, &con, &tag)) { - kfree(id); - return 0; - } - - if (con != ASN1_PRI) { - kfree(id); - return 0; - } - - type = 0; - if (!snmp_tag_cls2syntax(tag, cls, &type)) { - kfree(id); - return 0; - } - - l = 0; - switch (type) { - case SNMP_INTEGER: - len = sizeof(long); - if (!asn1_long_decode(ctx, end, &l)) { - kfree(id); - return 0; - } - *obj = kmalloc(sizeof(struct snmp_object) + len, - GFP_ATOMIC); - if (*obj == NULL) { - kfree(id); - if (net_ratelimit()) - printk("OOM in bsalg (%d)\n", __LINE__); - return 0; - } - (*obj)->syntax.l[0] = l; - break; - case SNMP_OCTETSTR: - case SNMP_OPAQUE: - if (!asn1_octets_decode(ctx, end, &p, &len)) { - kfree(id); - return 0; - } - *obj = kmalloc(sizeof(struct snmp_object) + len, - GFP_ATOMIC); - if (*obj == NULL) { - kfree(id); - if (net_ratelimit()) - printk("OOM in bsalg (%d)\n", __LINE__); - return 0; - } - memcpy((*obj)->syntax.c, p, len); - kfree(p); - break; - case SNMP_NULL: - case SNMP_NOSUCHOBJECT: - case SNMP_NOSUCHINSTANCE: - case SNMP_ENDOFMIBVIEW: - len = 0; - *obj = kmalloc(sizeof(struct snmp_object), GFP_ATOMIC); - if (*obj == NULL) { - kfree(id); - if (net_ratelimit()) - printk("OOM in bsalg (%d)\n", __LINE__); - return 0; - } - if (!asn1_null_decode(ctx, end)) { - kfree(id); - kfree(*obj); - *obj = NULL; - return 0; - } - break; - case SNMP_OBJECTID: - if (!asn1_oid_decode(ctx, end, (unsigned long **)&lp, &len)) { - kfree(id); - return 0; - } - len *= sizeof(unsigned long); - *obj = kmalloc(sizeof(struct snmp_object) + len, GFP_ATOMIC); - if (*obj == NULL) { - kfree(lp); - kfree(id); - if (net_ratelimit()) - printk("OOM in bsalg (%d)\n", __LINE__); - return 0; - } - memcpy((*obj)->syntax.ul, lp, len); - kfree(lp); - break; - case SNMP_IPADDR: - if (!asn1_octets_decode(ctx, end, &p, &len)) { - kfree(id); - return 0; - } - if (len != 4) { - kfree(p); - kfree(id); - return 0; - } - *obj = kmalloc(sizeof(struct snmp_object) + len, GFP_ATOMIC); - if (*obj == NULL) { - kfree(p); - kfree(id); - if (net_ratelimit()) - printk("OOM in bsalg (%d)\n", __LINE__); - return 0; - } - memcpy((*obj)->syntax.uc, p, len); - kfree(p); - break; - case SNMP_COUNTER: - case SNMP_GAUGE: - case SNMP_TIMETICKS: - len = sizeof(unsigned long); - if (!asn1_ulong_decode(ctx, end, &ul)) { - kfree(id); - return 0; - } - *obj = kmalloc(sizeof(struct snmp_object) + len, GFP_ATOMIC); - if (*obj == NULL) { - kfree(id); - if (net_ratelimit()) - printk("OOM in bsalg (%d)\n", __LINE__); - return 0; - } - (*obj)->syntax.ul[0] = ul; - break; - default: - kfree(id); - return 0; - } - - (*obj)->syntax_len = len; - (*obj)->type = type; - (*obj)->id = id; - (*obj)->id_len = idlen; - - if (!asn1_eoc_decode(ctx, eoc)) { - kfree(id); - kfree(*obj); - *obj = NULL; - return 0; - } - return 1; -} - -static unsigned char snmp_request_decode(struct asn1_ctx *ctx, - struct snmp_request *request) -{ - unsigned int cls, con, tag; - unsigned char *end; - - if (!asn1_header_decode(ctx, &end, &cls, &con, &tag)) - return 0; - - if (cls != ASN1_UNI || con != ASN1_PRI || tag != ASN1_INT) - return 0; - - if (!asn1_ulong_decode(ctx, end, &request->id)) - return 0; - - if (!asn1_header_decode(ctx, &end, &cls, &con, &tag)) - return 0; - - if (cls != ASN1_UNI || con != ASN1_PRI || tag != 
ASN1_INT) - return 0; - - if (!asn1_uint_decode(ctx, end, &request->error_status)) - return 0; - - if (!asn1_header_decode(ctx, &end, &cls, &con, &tag)) - return 0; - - if (cls != ASN1_UNI || con != ASN1_PRI || tag != ASN1_INT) - return 0; - - if (!asn1_uint_decode(ctx, end, &request->error_index)) - return 0; - - return 1; -} - -/* - * Fast checksum update for possibly oddly-aligned UDP byte, from the - * code example in the draft. - */ -static void fast_csum(__sum16 *csum, - const unsigned char *optr, - const unsigned char *nptr, - int offset) -{ - unsigned char s[4]; - - if (offset & 1) { - s[0] = s[2] = 0; - s[1] = ~*optr; - s[3] = *nptr; - } else { - s[1] = s[3] = 0; - s[0] = ~*optr; - s[2] = *nptr; - } - - *csum = csum_fold(csum_partial(s, 4, ~csum_unfold(*csum))); -} - -/* - * Mangle IP address. - * - begin points to the start of the snmp messgae - * - addr points to the start of the address - */ -static inline void mangle_address(unsigned char *begin, - unsigned char *addr, - const struct oct1_map *map, - __sum16 *check) -{ - if (map->from == NOCT1(addr)) { - u_int32_t old; - - if (debug) - memcpy(&old, (unsigned char *)addr, sizeof(old)); - - *addr = map->to; - - /* Update UDP checksum if being used */ - if (*check) { - fast_csum(check, - &map->from, &map->to, addr - begin); - } - - if (debug) - printk(KERN_DEBUG "bsalg: mapped %u.%u.%u.%u to " - "%u.%u.%u.%u\n", NIPQUAD(old), NIPQUAD(*addr)); - } -} - -static unsigned char snmp_trap_decode(struct asn1_ctx *ctx, - struct snmp_v1_trap *trap, - const struct oct1_map *map, - __sum16 *check) -{ - unsigned int cls, con, tag, len; - unsigned char *end; - - if (!asn1_header_decode(ctx, &end, &cls, &con, &tag)) - return 0; - - if (cls != ASN1_UNI || con != ASN1_PRI || tag != ASN1_OJI) - return 0; - - if (!asn1_oid_decode(ctx, end, &trap->id, &trap->id_len)) - return 0; - - if (!asn1_header_decode(ctx, &end, &cls, &con, &tag)) - goto err_id_free; - - if (!((cls == ASN1_APL && con == ASN1_PRI && tag == SNMP_IPA) || - (cls == ASN1_UNI && con == ASN1_PRI && tag == ASN1_OTS))) - goto err_id_free; - - if (!asn1_octets_decode(ctx, end, (unsigned char **)&trap->ip_address, &len)) - goto err_id_free; - - /* IPv4 only */ - if (len != 4) - goto err_addr_free; - - mangle_address(ctx->begin, ctx->pointer - 4, map, check); - - if (!asn1_header_decode(ctx, &end, &cls, &con, &tag)) - goto err_addr_free; - - if (cls != ASN1_UNI || con != ASN1_PRI || tag != ASN1_INT) - goto err_addr_free; - - if (!asn1_uint_decode(ctx, end, &trap->general)) - goto err_addr_free; - - if (!asn1_header_decode(ctx, &end, &cls, &con, &tag)) - goto err_addr_free; - - if (cls != ASN1_UNI || con != ASN1_PRI || tag != ASN1_INT) - goto err_addr_free; - - if (!asn1_uint_decode(ctx, end, &trap->specific)) - goto err_addr_free; - - if (!asn1_header_decode(ctx, &end, &cls, &con, &tag)) - goto err_addr_free; - - if (!((cls == ASN1_APL && con == ASN1_PRI && tag == SNMP_TIT) || - (cls == ASN1_UNI && con == ASN1_PRI && tag == ASN1_INT))) - goto err_addr_free; - - if (!asn1_ulong_decode(ctx, end, &trap->time)) - goto err_addr_free; - - return 1; - -err_addr_free: - kfree((unsigned long *)trap->ip_address); - -err_id_free: - kfree(trap->id); - - return 0; -} - -/***************************************************************************** - * - * Misc. 
routines - * - *****************************************************************************/ - -static void hex_dump(unsigned char *buf, size_t len) -{ - size_t i; - - for (i = 0; i < len; i++) { - if (i && !(i % 16)) - printk("\n"); - printk("%02x ", *(buf + i)); - } - printk("\n"); -} - -/* - * Parse and mangle SNMP message according to mapping. - * (And this is the fucking 'basic' method). - */ -static int snmp_parse_mangle(unsigned char *msg, - u_int16_t len, - const struct oct1_map *map, - __sum16 *check) -{ - unsigned char *eoc, *end; - unsigned int cls, con, tag, vers, pdutype; - struct asn1_ctx ctx; - struct asn1_octstr comm; - struct snmp_object **obj; - - if (debug > 1) - hex_dump(msg, len); - - asn1_open(&ctx, msg, len); - - /* - * Start of SNMP message. - */ - if (!asn1_header_decode(&ctx, &eoc, &cls, &con, &tag)) - return 0; - if (cls != ASN1_UNI || con != ASN1_CON || tag != ASN1_SEQ) - return 0; - - /* - * Version 1 or 2 handled. - */ - if (!asn1_header_decode(&ctx, &end, &cls, &con, &tag)) - return 0; - if (cls != ASN1_UNI || con != ASN1_PRI || tag != ASN1_INT) - return 0; - if (!asn1_uint_decode (&ctx, end, &vers)) - return 0; - if (debug > 1) - printk(KERN_DEBUG "bsalg: snmp version: %u\n", vers + 1); - if (vers > 1) - return 1; - - /* - * Community. - */ - if (!asn1_header_decode (&ctx, &end, &cls, &con, &tag)) - return 0; - if (cls != ASN1_UNI || con != ASN1_PRI || tag != ASN1_OTS) - return 0; - if (!asn1_octets_decode(&ctx, end, &comm.data, &comm.len)) - return 0; - if (debug > 1) { - unsigned int i; - - printk(KERN_DEBUG "bsalg: community: "); - for (i = 0; i < comm.len; i++) - printk("%c", comm.data[i]); - printk("\n"); - } - kfree(comm.data); - - /* - * PDU type - */ - if (!asn1_header_decode(&ctx, &eoc, &cls, &con, &pdutype)) - return 0; - if (cls != ASN1_CTX || con != ASN1_CON) - return 0; - if (debug > 1) { - unsigned char *pdus[] = { - [SNMP_PDU_GET] = "get", - [SNMP_PDU_NEXT] = "get-next", - [SNMP_PDU_RESPONSE] = "response", - [SNMP_PDU_SET] = "set", - [SNMP_PDU_TRAP1] = "trapv1", - [SNMP_PDU_BULK] = "bulk", - [SNMP_PDU_INFORM] = "inform", - [SNMP_PDU_TRAP2] = "trapv2" - }; - - if (pdutype > SNMP_PDU_TRAP2) - printk(KERN_DEBUG "bsalg: bad pdu type %u\n", pdutype); - else - printk(KERN_DEBUG "bsalg: pdu: %s\n", pdus[pdutype]); - } - if (pdutype != SNMP_PDU_RESPONSE && - pdutype != SNMP_PDU_TRAP1 && pdutype != SNMP_PDU_TRAP2) - return 1; - - /* - * Request header or v1 trap - */ - if (pdutype == SNMP_PDU_TRAP1) { - struct snmp_v1_trap trap; - unsigned char ret = snmp_trap_decode(&ctx, &trap, map, check); - - if (ret) { - kfree(trap.id); - kfree((unsigned long *)trap.ip_address); - } else - return ret; - - } else { - struct snmp_request req; - - if (!snmp_request_decode(&ctx, &req)) - return 0; - - if (debug > 1) - printk(KERN_DEBUG "bsalg: request: id=0x%lx error_status=%u " - "error_index=%u\n", req.id, req.error_status, - req.error_index); - } - - /* - * Loop through objects, look for IP addresses to mangle. 
- */ - if (!asn1_header_decode(&ctx, &eoc, &cls, &con, &tag)) - return 0; - - if (cls != ASN1_UNI || con != ASN1_CON || tag != ASN1_SEQ) - return 0; - - obj = kmalloc(sizeof(struct snmp_object), GFP_ATOMIC); - if (obj == NULL) { - if (net_ratelimit()) - printk(KERN_WARNING "OOM in bsalg(%d)\n", __LINE__); - return 0; - } - - while (!asn1_eoc_decode(&ctx, eoc)) { - unsigned int i; - - if (!snmp_object_decode(&ctx, obj)) { - if (*obj) { - kfree((*obj)->id); - kfree(*obj); - } - kfree(obj); - return 0; - } - - if (debug > 1) { - printk(KERN_DEBUG "bsalg: object: "); - for (i = 0; i < (*obj)->id_len; i++) { - if (i > 0) - printk("."); - printk("%lu", (*obj)->id[i]); - } - printk(": type=%u\n", (*obj)->type); - - } - - if ((*obj)->type == SNMP_IPADDR) - mangle_address(ctx.begin, ctx.pointer - 4 , map, check); - - kfree((*obj)->id); - kfree(*obj); - } - kfree(obj); - - if (!asn1_eoc_decode(&ctx, eoc)) - return 0; - - return 1; -} - -/***************************************************************************** - * - * NAT routines. - * - *****************************************************************************/ - -/* - * SNMP translation routine. - */ -static int snmp_translate(struct ip_conntrack *ct, - enum ip_conntrack_info ctinfo, - struct sk_buff **pskb) -{ - struct iphdr *iph = (*pskb)->nh.iph; - struct udphdr *udph = (struct udphdr *)((__be32 *)iph + iph->ihl); - u_int16_t udplen = ntohs(udph->len); - u_int16_t paylen = udplen - sizeof(struct udphdr); - int dir = CTINFO2DIR(ctinfo); - struct oct1_map map; - - /* - * Determine mappping for application layer addresses based - * on NAT manipulations for the packet. - */ - if (dir == IP_CT_DIR_ORIGINAL) { - /* SNAT traps */ - map.from = NOCT1(&ct->tuplehash[IP_CT_DIR_ORIGINAL].tuple.src.ip); - map.to = NOCT1(&ct->tuplehash[IP_CT_DIR_REPLY].tuple.dst.ip); - } else { - /* DNAT replies */ - map.from = NOCT1(&ct->tuplehash[IP_CT_DIR_REPLY].tuple.src.ip); - map.to = NOCT1(&ct->tuplehash[IP_CT_DIR_ORIGINAL].tuple.dst.ip); - } - - if (map.from == map.to) - return NF_ACCEPT; - - if (!snmp_parse_mangle((unsigned char *)udph + sizeof(struct udphdr), - paylen, &map, &udph->check)) { - if (net_ratelimit()) - printk(KERN_WARNING "bsalg: parser failed\n"); - return NF_DROP; - } - return NF_ACCEPT; -} - -/* We don't actually set up expectations, just adjust internal IP - * addresses if this is being NATted */ -static int help(struct sk_buff **pskb, - struct ip_conntrack *ct, - enum ip_conntrack_info ctinfo) -{ - int dir = CTINFO2DIR(ctinfo); - unsigned int ret; - struct iphdr *iph = (*pskb)->nh.iph; - struct udphdr *udph = (struct udphdr *)((u_int32_t *)iph + iph->ihl); - - /* SNMP replies and originating SNMP traps get mangled */ - if (udph->source == htons(SNMP_PORT) && dir != IP_CT_DIR_REPLY) - return NF_ACCEPT; - if (udph->dest == htons(SNMP_TRAP_PORT) && dir != IP_CT_DIR_ORIGINAL) - return NF_ACCEPT; - - /* No NAT? */ - if (!(ct->status & IPS_NAT_MASK)) - return NF_ACCEPT; - - /* - * Make sure the packet length is ok. So far, we were only guaranteed - * to have a valid length IP header plus 8 bytes, which means we have - * enough room for a UDP header. Just verify the UDP length field so we - * can mess around with the payload. 
- */ - if (ntohs(udph->len) != (*pskb)->len - (iph->ihl << 2)) { - if (net_ratelimit()) - printk(KERN_WARNING "SNMP: dropping malformed packet " - "src=%u.%u.%u.%u dst=%u.%u.%u.%u\n", - NIPQUAD(iph->saddr), NIPQUAD(iph->daddr)); - return NF_DROP; - } - - if (!skb_make_writable(pskb, (*pskb)->len)) - return NF_DROP; - - spin_lock_bh(&snmp_lock); - ret = snmp_translate(ct, ctinfo, pskb); - spin_unlock_bh(&snmp_lock); - return ret; -} - -static struct ip_conntrack_helper snmp_helper = { - .max_expected = 0, - .timeout = 180, - .me = THIS_MODULE, - .help = help, - .name = "snmp", - - .tuple = {.src = {.u = {.udp = {.port = __constant_htons(SNMP_PORT)}}}, - .dst = {.protonum = IPPROTO_UDP}, - }, - .mask = {.src = {.u = {0xFFFF}}, - .dst = {.protonum = 0xFF}, - }, -}; - -static struct ip_conntrack_helper snmp_trap_helper = { - .max_expected = 0, - .timeout = 180, - .me = THIS_MODULE, - .help = help, - .name = "snmp_trap", - - .tuple = {.src = {.u = {.udp = {.port = __constant_htons(SNMP_TRAP_PORT)}}}, - .dst = {.protonum = IPPROTO_UDP}, - }, - .mask = {.src = {.u = {0xFFFF}}, - .dst = {.protonum = 0xFF}, - }, -}; - -/***************************************************************************** - * - * Module stuff. - * - *****************************************************************************/ - -static int __init ip_nat_snmp_basic_init(void) -{ - int ret = 0; - - ret = ip_conntrack_helper_register(&snmp_helper); - if (ret < 0) - return ret; - ret = ip_conntrack_helper_register(&snmp_trap_helper); - if (ret < 0) { - ip_conntrack_helper_unregister(&snmp_helper); - return ret; - } - return ret; -} - -static void __exit ip_nat_snmp_basic_fini(void) -{ - ip_conntrack_helper_unregister(&snmp_helper); - ip_conntrack_helper_unregister(&snmp_trap_helper); -} - -module_init(ip_nat_snmp_basic_init); -module_exit(ip_nat_snmp_basic_fini); - -module_param(debug, int, 0600); diff -puN net/ipv4/netfilter/ip_nat_standalone.c~git-net /dev/null --- a/net/ipv4/netfilter/ip_nat_standalone.c +++ /dev/null @@ -1,388 +0,0 @@ -/* This file contains all the functions required for the standalone - ip_nat module. - - These are not required by the compatibility layer. -*/ - -/* (C) 1999-2001 Paul `Rusty' Russell - * (C) 2002-2004 Netfilter Core Team - * - * This program is free software; you can redistribute it and/or modify - * it under the terms of the GNU General Public License version 2 as - * published by the Free Software Foundation. - */ - -/* - * 23 Apr 2001: Harald Welte - * - new API and handling of conntrack/nat helpers - * - now capable of multiple expectations for one master - * */ - -#include -#include -#include -#include -#include -#include -#include -#include -#include -#include -#include - -#include -#include -#include -#include -#include -#include -#include - -#if 0 -#define DEBUGP printk -#else -#define DEBUGP(format, args...) 
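/*
 * Editorial sketch, not part of the patch: the fast_csum() routine removed
 * above folds a one-byte rewrite into an existing UDP checksum instead of
 * recomputing it over the whole datagram.  The same RFC 1624-style idea in
 * plain user-space C; all names here are made up for illustration, and the
 * checksum is treated as a host-order 16-bit value whose high byte covers
 * even payload offsets.
 */
#include <stdint.h>
#include <stddef.h>

static uint16_t csum_fold32(uint32_t sum)
{
	/* Fold the end-around carries of the one's-complement sum. */
	while (sum >> 16)
		sum = (sum & 0xffff) + (sum >> 16);
	return (uint16_t)sum;
}

/* Update *csum after the byte at 'offset' changes from 'oldb' to 'newb'. */
static void csum_update_byte(uint16_t *csum, size_t offset,
			     uint8_t oldb, uint8_t newb)
{
	/* Place the byte in the lane it occupies within its 16-bit word. */
	uint16_t oldw = (offset & 1) ? oldb : (uint16_t)oldb << 8;
	uint16_t neww = (offset & 1) ? newb : (uint16_t)newb << 8;
	uint32_t sum  = (uint16_t)~*csum;	/* recover the running sum  */

	sum += (uint16_t)~oldw;			/* subtract the old word    */
	sum += neww;				/* add the new word         */
	*csum = ~csum_fold32(sum);		/* store the new complement */
}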
-#endif - -#ifdef CONFIG_XFRM -static void nat_decode_session(struct sk_buff *skb, struct flowi *fl) -{ - struct ip_conntrack *ct; - struct ip_conntrack_tuple *t; - enum ip_conntrack_info ctinfo; - enum ip_conntrack_dir dir; - unsigned long statusbit; - - ct = ip_conntrack_get(skb, &ctinfo); - if (ct == NULL) - return; - dir = CTINFO2DIR(ctinfo); - t = &ct->tuplehash[dir].tuple; - - if (dir == IP_CT_DIR_ORIGINAL) - statusbit = IPS_DST_NAT; - else - statusbit = IPS_SRC_NAT; - - if (ct->status & statusbit) { - fl->fl4_dst = t->dst.ip; - if (t->dst.protonum == IPPROTO_TCP || - t->dst.protonum == IPPROTO_UDP) - fl->fl_ip_dport = t->dst.u.tcp.port; - } - - statusbit ^= IPS_NAT_MASK; - - if (ct->status & statusbit) { - fl->fl4_src = t->src.ip; - if (t->dst.protonum == IPPROTO_TCP || - t->dst.protonum == IPPROTO_UDP) - fl->fl_ip_sport = t->src.u.tcp.port; - } -} -#endif - -static unsigned int -ip_nat_fn(unsigned int hooknum, - struct sk_buff **pskb, - const struct net_device *in, - const struct net_device *out, - int (*okfn)(struct sk_buff *)) -{ - struct ip_conntrack *ct; - enum ip_conntrack_info ctinfo; - struct ip_nat_info *info; - /* maniptype == SRC for postrouting. */ - enum ip_nat_manip_type maniptype = HOOK2MANIP(hooknum); - - /* We never see fragments: conntrack defrags on pre-routing - and local-out, and ip_nat_out protects post-routing. */ - IP_NF_ASSERT(!((*pskb)->nh.iph->frag_off - & htons(IP_MF|IP_OFFSET))); - - ct = ip_conntrack_get(*pskb, &ctinfo); - /* Can't track? It's not due to stress, or conntrack would - have dropped it. Hence it's the user's responsibilty to - packet filter it out, or implement conntrack/NAT for that - protocol. 8) --RR */ - if (!ct) { - /* Exception: ICMP redirect to new connection (not in - hash table yet). We must not let this through, in - case we're doing NAT to the same network. */ - if ((*pskb)->nh.iph->protocol == IPPROTO_ICMP) { - struct icmphdr _hdr, *hp; - - hp = skb_header_pointer(*pskb, - (*pskb)->nh.iph->ihl*4, - sizeof(_hdr), &_hdr); - if (hp != NULL && - hp->type == ICMP_REDIRECT) - return NF_DROP; - } - return NF_ACCEPT; - } - - /* Don't try to NAT if this packet is not conntracked */ - if (ct == &ip_conntrack_untracked) - return NF_ACCEPT; - - switch (ctinfo) { - case IP_CT_RELATED: - case IP_CT_RELATED+IP_CT_IS_REPLY: - if ((*pskb)->nh.iph->protocol == IPPROTO_ICMP) { - if (!ip_nat_icmp_reply_translation(ct, ctinfo, - hooknum, pskb)) - return NF_DROP; - else - return NF_ACCEPT; - } - /* Fall thru... (Only ICMPs can be IP_CT_IS_REPLY) */ - case IP_CT_NEW: - info = &ct->nat.info; - - /* Seen it before? This can happen for loopback, retrans, - or local packets.. */ - if (!ip_nat_initialized(ct, maniptype)) { - unsigned int ret; - - if (unlikely(is_confirmed(ct))) - /* NAT module was loaded late */ - ret = alloc_null_binding_confirmed(ct, info, - hooknum); - else if (hooknum == NF_IP_LOCAL_IN) - /* LOCAL_IN hook doesn't have a chain! */ - ret = alloc_null_binding(ct, info, hooknum); - else - ret = ip_nat_rule_find(pskb, hooknum, - in, out, ct, - info); - - if (ret != NF_ACCEPT) { - return ret; - } - } else - DEBUGP("Already setup manip %s for ct %p\n", - maniptype == IP_NAT_MANIP_SRC ? 
"SRC" : "DST", - ct); - break; - - default: - /* ESTABLISHED */ - IP_NF_ASSERT(ctinfo == IP_CT_ESTABLISHED - || ctinfo == (IP_CT_ESTABLISHED+IP_CT_IS_REPLY)); - info = &ct->nat.info; - } - - IP_NF_ASSERT(info); - return ip_nat_packet(ct, ctinfo, hooknum, pskb); -} - -static unsigned int -ip_nat_in(unsigned int hooknum, - struct sk_buff **pskb, - const struct net_device *in, - const struct net_device *out, - int (*okfn)(struct sk_buff *)) -{ - unsigned int ret; - __be32 daddr = (*pskb)->nh.iph->daddr; - - ret = ip_nat_fn(hooknum, pskb, in, out, okfn); - if (ret != NF_DROP && ret != NF_STOLEN - && daddr != (*pskb)->nh.iph->daddr) { - dst_release((*pskb)->dst); - (*pskb)->dst = NULL; - } - return ret; -} - -static unsigned int -ip_nat_out(unsigned int hooknum, - struct sk_buff **pskb, - const struct net_device *in, - const struct net_device *out, - int (*okfn)(struct sk_buff *)) -{ -#ifdef CONFIG_XFRM - struct ip_conntrack *ct; - enum ip_conntrack_info ctinfo; -#endif - unsigned int ret; - - /* root is playing with raw sockets. */ - if ((*pskb)->len < sizeof(struct iphdr) - || (*pskb)->nh.iph->ihl * 4 < sizeof(struct iphdr)) - return NF_ACCEPT; - - ret = ip_nat_fn(hooknum, pskb, in, out, okfn); -#ifdef CONFIG_XFRM - if (ret != NF_DROP && ret != NF_STOLEN - && (ct = ip_conntrack_get(*pskb, &ctinfo)) != NULL) { - enum ip_conntrack_dir dir = CTINFO2DIR(ctinfo); - - if (ct->tuplehash[dir].tuple.src.ip != - ct->tuplehash[!dir].tuple.dst.ip - || ct->tuplehash[dir].tuple.src.u.all != - ct->tuplehash[!dir].tuple.dst.u.all - ) - return ip_xfrm_me_harder(pskb) == 0 ? ret : NF_DROP; - } -#endif - return ret; -} - -static unsigned int -ip_nat_local_fn(unsigned int hooknum, - struct sk_buff **pskb, - const struct net_device *in, - const struct net_device *out, - int (*okfn)(struct sk_buff *)) -{ - struct ip_conntrack *ct; - enum ip_conntrack_info ctinfo; - unsigned int ret; - - /* root is playing with raw sockets. */ - if ((*pskb)->len < sizeof(struct iphdr) - || (*pskb)->nh.iph->ihl * 4 < sizeof(struct iphdr)) - return NF_ACCEPT; - - ret = ip_nat_fn(hooknum, pskb, in, out, okfn); - if (ret != NF_DROP && ret != NF_STOLEN - && (ct = ip_conntrack_get(*pskb, &ctinfo)) != NULL) { - enum ip_conntrack_dir dir = CTINFO2DIR(ctinfo); - - if (ct->tuplehash[dir].tuple.dst.ip != - ct->tuplehash[!dir].tuple.src.ip) { - if (ip_route_me_harder(pskb, RTN_UNSPEC)) - ret = NF_DROP; - } -#ifdef CONFIG_XFRM - else if (ct->tuplehash[dir].tuple.dst.u.all != - ct->tuplehash[!dir].tuple.src.u.all) - if (ip_xfrm_me_harder(pskb)) - ret = NF_DROP; -#endif - - } - return ret; -} - -static unsigned int -ip_nat_adjust(unsigned int hooknum, - struct sk_buff **pskb, - const struct net_device *in, - const struct net_device *out, - int (*okfn)(struct sk_buff *)) -{ - struct ip_conntrack *ct; - enum ip_conntrack_info ctinfo; - - ct = ip_conntrack_get(*pskb, &ctinfo); - if (ct && test_bit(IPS_SEQ_ADJUST_BIT, &ct->status)) { - DEBUGP("ip_nat_standalone: adjusting sequence number\n"); - if (!ip_nat_seq_adjust(pskb, ct, ctinfo)) - return NF_DROP; - } - return NF_ACCEPT; -} - -/* We must be after connection tracking and before packet filtering. 
*/ - -static struct nf_hook_ops ip_nat_ops[] = { - /* Before packet filtering, change destination */ - { - .hook = ip_nat_in, - .owner = THIS_MODULE, - .pf = PF_INET, - .hooknum = NF_IP_PRE_ROUTING, - .priority = NF_IP_PRI_NAT_DST, - }, - /* After packet filtering, change source */ - { - .hook = ip_nat_out, - .owner = THIS_MODULE, - .pf = PF_INET, - .hooknum = NF_IP_POST_ROUTING, - .priority = NF_IP_PRI_NAT_SRC, - }, - /* After conntrack, adjust sequence number */ - { - .hook = ip_nat_adjust, - .owner = THIS_MODULE, - .pf = PF_INET, - .hooknum = NF_IP_POST_ROUTING, - .priority = NF_IP_PRI_NAT_SEQ_ADJUST, - }, - /* Before packet filtering, change destination */ - { - .hook = ip_nat_local_fn, - .owner = THIS_MODULE, - .pf = PF_INET, - .hooknum = NF_IP_LOCAL_OUT, - .priority = NF_IP_PRI_NAT_DST, - }, - /* After packet filtering, change source */ - { - .hook = ip_nat_fn, - .owner = THIS_MODULE, - .pf = PF_INET, - .hooknum = NF_IP_LOCAL_IN, - .priority = NF_IP_PRI_NAT_SRC, - }, - /* After conntrack, adjust sequence number */ - { - .hook = ip_nat_adjust, - .owner = THIS_MODULE, - .pf = PF_INET, - .hooknum = NF_IP_LOCAL_IN, - .priority = NF_IP_PRI_NAT_SEQ_ADJUST, - }, -}; - -static int __init ip_nat_standalone_init(void) -{ - int ret = 0; - - need_conntrack(); - -#ifdef CONFIG_XFRM - BUG_ON(ip_nat_decode_session != NULL); - ip_nat_decode_session = nat_decode_session; -#endif - ret = ip_nat_rule_init(); - if (ret < 0) { - printk("ip_nat_init: can't setup rules.\n"); - goto cleanup_decode_session; - } - ret = nf_register_hooks(ip_nat_ops, ARRAY_SIZE(ip_nat_ops)); - if (ret < 0) { - printk("ip_nat_init: can't register hooks.\n"); - goto cleanup_rule_init; - } - return ret; - - cleanup_rule_init: - ip_nat_rule_cleanup(); - cleanup_decode_session: -#ifdef CONFIG_XFRM - ip_nat_decode_session = NULL; - synchronize_net(); -#endif - return ret; -} - -static void __exit ip_nat_standalone_fini(void) -{ - nf_unregister_hooks(ip_nat_ops, ARRAY_SIZE(ip_nat_ops)); - ip_nat_rule_cleanup(); -#ifdef CONFIG_XFRM - ip_nat_decode_session = NULL; - synchronize_net(); -#endif -} - -module_init(ip_nat_standalone_init); -module_exit(ip_nat_standalone_fini); - -MODULE_LICENSE("GPL"); diff -puN net/ipv4/netfilter/ip_nat_tftp.c~git-net /dev/null --- a/net/ipv4/netfilter/ip_nat_tftp.c +++ /dev/null @@ -1,70 +0,0 @@ -/* (C) 2001-2002 Magnus Boden - * - * This program is free software; you can redistribute it and/or modify - * it under the terms of the GNU General Public License version 2 as - * published by the Free Software Foundation. - * - * Version: 0.0.7 - * - * Thu 21 Mar 2002 Harald Welte - * - Port to newnat API - * - * This module currently supports DNAT: - * iptables -t nat -A PREROUTING -d x.x.x.x -j DNAT --to-dest x.x.x.y - * - * and SNAT: - * iptables -t nat -A POSTROUTING { -j MASQUERADE , -j SNAT --to-source x.x.x.x } - * - * It has not been tested with - * -j SNAT --to-source x.x.x.x-x.x.x.y since I only have one external ip - * If you do test this please let me know if it works or not. 
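/*
 * Editorial sketch, not from the patch: the ip_nat_ops[] table removed above
 * is the standard way a module attaches to the netfilter hook points.  A
 * minimal single-hook module against the same 2.6.21-era API shown in the
 * diff (struct nf_hook_ops, NF_IP_* hook numbers, sk_buff ** hook signature);
 * the hook itself simply accepts every packet.
 */
#include <linux/module.h>
#include <linux/netfilter.h>
#include <linux/netfilter_ipv4.h>
#include <linux/skbuff.h>

static unsigned int example_hook(unsigned int hooknum,
				 struct sk_buff **pskb,
				 const struct net_device *in,
				 const struct net_device *out,
				 int (*okfn)(struct sk_buff *))
{
	return NF_ACCEPT;	/* pass everything through unchanged */
}

static struct nf_hook_ops example_ops = {
	.hook     = example_hook,
	.owner    = THIS_MODULE,
	.pf       = PF_INET,
	.hooknum  = NF_IP_PRE_ROUTING,
	.priority = NF_IP_PRI_FIRST,	/* run before the NAT hooks above */
};

static int __init example_init(void)
{
	return nf_register_hook(&example_ops);
}

static void __exit example_exit(void)
{
	nf_unregister_hook(&example_ops);
}

module_init(example_init);
module_exit(example_exit);
MODULE_LICENSE("GPL");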
- * - */ - -#include -#include -#include -#include - -#include -#include -#include -#include -#include -#include -#include - -MODULE_AUTHOR("Magnus Boden "); -MODULE_DESCRIPTION("tftp NAT helper"); -MODULE_LICENSE("GPL"); - -static unsigned int help(struct sk_buff **pskb, - enum ip_conntrack_info ctinfo, - struct ip_conntrack_expect *exp) -{ - struct ip_conntrack *ct = exp->master; - - exp->saved_proto.udp.port - = ct->tuplehash[IP_CT_DIR_ORIGINAL].tuple.src.u.udp.port; - exp->dir = IP_CT_DIR_REPLY; - exp->expectfn = ip_nat_follow_master; - if (ip_conntrack_expect_related(exp) != 0) - return NF_DROP; - return NF_ACCEPT; -} - -static void __exit ip_nat_tftp_fini(void) -{ - rcu_assign_pointer(ip_nat_tftp_hook, NULL); - synchronize_rcu(); -} - -static int __init ip_nat_tftp_init(void) -{ - BUG_ON(rcu_dereference(ip_nat_tftp_hook)); - rcu_assign_pointer(ip_nat_tftp_hook, help); - return 0; -} - -module_init(ip_nat_tftp_init); -module_exit(ip_nat_tftp_fini); diff -puN net/ipv4/netfilter/ip_queue.c~git-net net/ipv4/netfilter/ip_queue.c --- a/net/ipv4/netfilter/ip_queue.c~git-net +++ a/net/ipv4/netfilter/ip_queue.c @@ -8,18 +8,6 @@ * This program is free software; you can redistribute it and/or modify * it under the terms of the GNU General Public License version 2 as * published by the Free Software Foundation. - * - * 2000-03-27: Simplified code (thanks to Andi Kleen for clues). - * 2000-05-20: Fixed notifier problems (following Miguel Freitas' report). - * 2000-06-19: Fixed so nfmark is copied to metadata (reported by Sebastian - * Zander). - * 2000-08-01: Added Nick Williams' MAC support. - * 2002-06-25: Code cleanup. - * 2005-01-10: Added /proc counter for dropped packets; fixed so - * packets aren't delivered to user space if they're going - * to be dropped. 
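/*
 * Editorial sketch, not from the patch: the init/fini pair removed above is
 * the usual RCU-protected function-pointer hand-off between a conntrack
 * helper and its optional NAT module.  Names below are illustrative only;
 * the real hook pointers (ip_nat_tftp_hook, ip_nat_sip_hook, ...) are
 * declared by the corresponding conntrack helpers.
 */
#include <linux/rcupdate.h>
#include <linux/errno.h>

typedef int (*example_hook_t)(int arg);

static example_hook_t example_hook;	/* NULL until the NAT module loads */

/* Publisher side (NAT module): install and remove the hook. */
static void example_hook_register(example_hook_t fn)
{
	rcu_assign_pointer(example_hook, fn);
}

static void example_hook_unregister(void)
{
	rcu_assign_pointer(example_hook, NULL);
	synchronize_rcu();	/* wait for in-flight callers to drain */
}

/* Consumer side (conntrack helper): call the hook if it is present. */
static int example_call_hook(int arg)
{
	example_hook_t fn;
	int ret = -ENOENT;

	rcu_read_lock();
	fn = rcu_dereference(example_hook);
	if (fn)
		ret = fn(arg);
	rcu_read_unlock();
	return ret;
}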
- * 2005-05-26: local_bh_{disable,enable} around nf_reinject (Harald Welte) - * */ #include #include @@ -191,12 +179,13 @@ ipq_flush(int verdict) static struct sk_buff * ipq_build_packet_message(struct ipq_queue_entry *entry, int *errp) { - unsigned char *old_tail; + sk_buff_data_t old_tail; size_t size = 0; size_t data_len = 0; struct sk_buff *skb; struct ipq_packet_msg *pmsg; struct nlmsghdr *nlh; + struct timeval tv; read_lock_bh(&queue_lock); @@ -234,15 +223,16 @@ ipq_build_packet_message(struct ipq_queu if (!skb) goto nlmsg_failure; - old_tail= skb->tail; + old_tail = skb->tail; nlh = NLMSG_PUT(skb, 0, 0, IPQM_PACKET, size - sizeof(*nlh)); pmsg = NLMSG_DATA(nlh); memset(pmsg, 0, sizeof(*pmsg)); pmsg->packet_id = (unsigned long )entry; pmsg->data_len = data_len; - pmsg->timestamp_sec = entry->skb->tstamp.off_sec; - pmsg->timestamp_usec = entry->skb->tstamp.off_usec; + tv = ktime_to_timeval(entry->skb->tstamp); + pmsg->timestamp_sec = tv.tv_sec; + pmsg->timestamp_usec = tv.tv_usec; pmsg->mark = entry->skb->mark; pmsg->hook = entry->info->hook; pmsg->hw_protocol = entry->skb->protocol; @@ -378,7 +368,7 @@ ipq_mangle_ipv4(ipq_verdict_msg_t *v, st } if (!skb_make_writable(&e->skb, v->data_len)) return -ENOMEM; - memcpy(e->skb->data, v->payload, v->data_len); + skb_copy_to_linear_data(e->skb, v->payload, v->data_len); e->skb->ip_summed = CHECKSUM_NONE; return 0; @@ -495,7 +485,7 @@ ipq_rcv_skb(struct sk_buff *skb) if (skblen < sizeof(*nlh)) return; - nlh = (struct nlmsghdr *)skb->data; + nlh = nlmsg_hdr(skb); nlmsglen = nlh->nlmsg_len; if (nlmsglen < sizeof(*nlh) || skblen < nlmsglen) return; @@ -678,7 +668,7 @@ static int __init ip_queue_init(void) netlink_register_notifier(&ipq_nl_notifier); ipqnl = netlink_kernel_create(NETLINK_FIREWALL, 0, ipq_rcv_sk, - THIS_MODULE); + NULL, THIS_MODULE); if (ipqnl == NULL) { printk(KERN_ERR "ip_queue: failed to create netlink socket\n"); goto cleanup_netlink_notifier; diff -puN net/ipv4/netfilter/ip_tables.c~git-net net/ipv4/netfilter/ip_tables.c --- a/net/ipv4/netfilter/ip_tables.c~git-net +++ a/net/ipv4/netfilter/ip_tables.c @@ -7,12 +7,6 @@ * This program is free software; you can redistribute it and/or modify * it under the terms of the GNU General Public License version 2 as * published by the Free Software Foundation. - * - * 19 Jan 2002 Harald Welte - * - increase module usage count as soon as we have rules inside - * a table - * 08 Oct 2005 Harald Welte - * - Generalize into "x_tables" layer and "{ip,ip6,arp}_tables" */ #include #include @@ -198,7 +192,7 @@ int do_match(struct ipt_entry_match *m, { /* Stop iteration if it doesn't match */ if (!m->u.kernel.match->match(skb, in, out, m->u.kernel.match, m->data, - offset, skb->nh.iph->ihl*4, hotdrop)) + offset, ip_hdrlen(skb), hotdrop)) return 1; else return 0; @@ -231,7 +225,7 @@ ipt_do_table(struct sk_buff **pskb, struct xt_table_info *private; /* Initialization */ - ip = (*pskb)->nh.iph; + ip = ip_hdr(*pskb); datalen = (*pskb)->len - ip->ihl * 4; indev = in ? in->name : nulldevname; outdev = out ? out->name : nulldevname; @@ -320,7 +314,7 @@ ipt_do_table(struct sk_buff **pskb, = 0x57acc001; #endif /* Target might have changed stuff. 
*/ - ip = (*pskb)->nh.iph; + ip = ip_hdr(*pskb); datalen = (*pskb)->len - ip->ihl * 4; if (verdict == IPT_CONTINUE) diff -puN net/ipv4/netfilter/ipt_CLUSTERIP.c~git-net net/ipv4/netfilter/ipt_CLUSTERIP.c --- a/net/ipv4/netfilter/ipt_CLUSTERIP.c~git-net +++ a/net/ipv4/netfilter/ipt_CLUSTERIP.c @@ -21,15 +21,12 @@ #include #include #include - -#include - #include - #include #include #include -#include +#include +#include #define CLUSTERIP_VERSION "0.8" @@ -240,7 +237,7 @@ clusterip_del_node(struct clusterip_conf static inline u_int32_t clusterip_hashfn(struct sk_buff *skb, struct clusterip_config *config) { - struct iphdr *iph = skb->nh.iph; + struct iphdr *iph = ip_hdr(skb); unsigned long hashval; u_int16_t sport, dport; u_int16_t *ports; @@ -310,15 +307,16 @@ target(struct sk_buff **pskb, const void *targinfo) { const struct ipt_clusterip_tgt_info *cipinfo = targinfo; + struct nf_conn *ct; enum ip_conntrack_info ctinfo; - u_int32_t *mark, hash; + u_int32_t hash; /* don't need to clusterip_config_get() here, since refcount * is only decremented by destroy() - and ip_tables guarantees * that the ->target() function isn't called after ->destroy() */ - mark = nf_ct_get_mark((*pskb), &ctinfo); - if (mark == NULL) { + ct = nf_ct_get(*pskb, &ctinfo); + if (ct == NULL) { printk(KERN_ERR "CLUSTERIP: no conntrack!\n"); /* FIXME: need to drop invalid ones, since replies * to outgoing connections of other nodes will be @@ -328,7 +326,7 @@ target(struct sk_buff **pskb, /* special case: ICMP error handling. conntrack distinguishes between * error messages (RELATED) and information requests (see below) */ - if ((*pskb)->nh.iph->protocol == IPPROTO_ICMP + if (ip_hdr(*pskb)->protocol == IPPROTO_ICMP && (ctinfo == IP_CT_RELATED || ctinfo == IP_CT_RELATED+IP_CT_IS_REPLY)) return XT_CONTINUE; @@ -341,7 +339,7 @@ target(struct sk_buff **pskb, switch (ctinfo) { case IP_CT_NEW: - *mark = hash; + ct->mark = hash; break; case IP_CT_RELATED: case IP_CT_RELATED+IP_CT_IS_REPLY: @@ -358,7 +356,7 @@ target(struct sk_buff **pskb, #ifdef DEBUG_CLUSTERP DUMP_TUPLE(&ct->tuplehash[IP_CT_DIR_ORIGINAL].tuple); #endif - DEBUGP("hash=%u ct_hash=%u ", hash, *mark); + DEBUGP("hash=%u ct_hash=%u ", hash, ct->mark); if (!clusterip_responsible(cipinfo->config, hash)) { DEBUGP("not responsible\n"); return NF_DROP; @@ -521,7 +519,7 @@ arp_mangle(unsigned int hook, const struct net_device *out, int (*okfn)(struct sk_buff *)) { - struct arphdr *arp = (*pskb)->nh.arph; + struct arphdr *arp = arp_hdr(*pskb); struct arp_payload *payload; struct clusterip_config *c; diff -puN net/ipv4/netfilter/ipt_ECN.c~git-net net/ipv4/netfilter/ipt_ECN.c --- a/net/ipv4/netfilter/ipt_ECN.c~git-net +++ a/net/ipv4/netfilter/ipt_ECN.c @@ -5,14 +5,13 @@ * This program is free software; you can redistribute it and/or modify * it under the terms of the GNU General Public License version 2 as * published by the Free Software Foundation. 
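/*
 * Editorial sketch, not from the patch: the hunks above replace direct
 * skb->nh.iph dereferences with the ip_hdr()/ip_hdrlen() accessors added
 * for the 2.6.22 sk_buff layout.  A hypothetical helper using the new
 * accessors the same way the converted iptables code does.
 */
#include <linux/skbuff.h>
#include <linux/ip.h>
#include <net/ip.h>

/* Transport header start: IP header base plus IP header length,
 * i.e. what used to be written as skb->nh.iph + iph->ihl*4. */
static inline void *example_l4_header(const struct sk_buff *skb)
{
	return (void *)ip_hdr(skb) + ip_hdrlen(skb);
}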
- * - * ipt_ECN.c,v 1.5 2002/08/18 19:36:51 laforge Exp */ #include #include #include #include +#include #include #include @@ -29,13 +28,13 @@ MODULE_DESCRIPTION("iptables ECN modific static inline int set_ect_ip(struct sk_buff **pskb, const struct ipt_ECN_info *einfo) { - struct iphdr *iph = (*pskb)->nh.iph; + struct iphdr *iph = ip_hdr(*pskb); if ((iph->tos & IPT_ECN_IP_MASK) != (einfo->ip_ect & IPT_ECN_IP_MASK)) { __u8 oldtos; if (!skb_make_writable(pskb, sizeof(struct iphdr))) return 0; - iph = (*pskb)->nh.iph; + iph = ip_hdr(*pskb); oldtos = iph->tos; iph->tos &= ~IPT_ECN_IP_MASK; iph->tos |= (einfo->ip_ect & IPT_ECN_IP_MASK); @@ -52,7 +51,7 @@ set_ect_tcp(struct sk_buff **pskb, const __be16 oldval; /* Not enought header? */ - tcph = skb_header_pointer(*pskb, (*pskb)->nh.iph->ihl*4, + tcph = skb_header_pointer(*pskb, ip_hdrlen(*pskb), sizeof(_tcph), &_tcph); if (!tcph) return 0; @@ -63,9 +62,9 @@ set_ect_tcp(struct sk_buff **pskb, const tcph->cwr == einfo->proto.tcp.cwr))) return 1; - if (!skb_make_writable(pskb, (*pskb)->nh.iph->ihl*4+sizeof(*tcph))) + if (!skb_make_writable(pskb, ip_hdrlen(*pskb) + sizeof(*tcph))) return 0; - tcph = (void *)(*pskb)->nh.iph + (*pskb)->nh.iph->ihl*4; + tcph = (void *)ip_hdr(*pskb) + ip_hdrlen(*pskb); oldval = ((__be16 *)tcph)[6]; if (einfo->operation & IPT_ECN_OP_SET_ECE) @@ -93,7 +92,7 @@ target(struct sk_buff **pskb, return NF_DROP; if (einfo->operation & (IPT_ECN_OP_SET_ECE | IPT_ECN_OP_SET_CWR) - && (*pskb)->nh.iph->protocol == IPPROTO_TCP) + && ip_hdr(*pskb)->protocol == IPPROTO_TCP) if (!set_ect_tcp(pskb, einfo)) return NF_DROP; diff -puN net/ipv4/netfilter/ipt_LOG.c~git-net net/ipv4/netfilter/ipt_LOG.c --- a/net/ipv4/netfilter/ipt_LOG.c~git-net +++ a/net/ipv4/netfilter/ipt_LOG.c @@ -399,9 +399,9 @@ ipt_log_packet(unsigned int pf, /* MAC logging for input chain only. 
*/ printk("MAC="); if (skb->dev && skb->dev->hard_header_len - && skb->mac.raw != (void*)skb->nh.iph) { + && skb->mac_header != skb->network_header) { int i; - unsigned char *p = skb->mac.raw; + const unsigned char *p = skb_mac_header(skb); for (i = 0; i < skb->dev->hard_header_len; i++,p++) printk("%02x%c", *p, i==skb->dev->hard_header_len - 1 @@ -477,14 +477,10 @@ static int __init ipt_log_init(void) ret = xt_register_target(&ipt_log_reg); if (ret < 0) return ret; - if (nf_log_register(PF_INET, &ipt_log_logger) < 0) { - printk(KERN_WARNING "ipt_LOG: not logging via system console " - "since somebody else already registered for PF_INET\n"); - /* we cannot make module load fail here, since otherwise - * iptables userspace would abort */ - } - - return 0; + ret = nf_log_register(PF_INET, &ipt_log_logger); + if (ret < 0 && ret != -EEXIST) + xt_unregister_target(&ipt_log_reg); + return ret; } static void __exit ipt_log_fini(void) diff -puN net/ipv4/netfilter/ipt_MASQUERADE.c~git-net net/ipv4/netfilter/ipt_MASQUERADE.c --- a/net/ipv4/netfilter/ipt_MASQUERADE.c~git-net +++ a/net/ipv4/netfilter/ipt_MASQUERADE.c @@ -19,12 +19,8 @@ #include #include #include -#include -#ifdef CONFIG_NF_NAT_NEEDED #include -#else -#include -#endif +#include #include MODULE_LICENSE("GPL"); @@ -48,7 +44,7 @@ masquerade_check(const char *tablename, void *targinfo, unsigned int hook_mask) { - const struct ip_nat_multi_range_compat *mr = targinfo; + const struct nf_nat_multi_range_compat *mr = targinfo; if (mr->range[0].flags & IP_NAT_RANGE_MAP_IPS) { DEBUGP("masquerade_check: bad MAP_IPS.\n"); @@ -69,33 +65,26 @@ masquerade_target(struct sk_buff **pskb, const struct xt_target *target, const void *targinfo) { -#ifdef CONFIG_NF_NAT_NEEDED + struct nf_conn *ct; struct nf_conn_nat *nat; -#endif - struct ip_conntrack *ct; enum ip_conntrack_info ctinfo; - struct ip_nat_range newrange; - const struct ip_nat_multi_range_compat *mr; + struct nf_nat_range newrange; + const struct nf_nat_multi_range_compat *mr; struct rtable *rt; __be32 newsrc; - IP_NF_ASSERT(hooknum == NF_IP_POST_ROUTING); + NF_CT_ASSERT(hooknum == NF_IP_POST_ROUTING); - ct = ip_conntrack_get(*pskb, &ctinfo); -#ifdef CONFIG_NF_NAT_NEEDED + ct = nf_ct_get(*pskb, &ctinfo); nat = nfct_nat(ct); -#endif - IP_NF_ASSERT(ct && (ctinfo == IP_CT_NEW || ctinfo == IP_CT_RELATED + + NF_CT_ASSERT(ct && (ctinfo == IP_CT_NEW || ctinfo == IP_CT_RELATED || ctinfo == IP_CT_RELATED + IP_CT_IS_REPLY)); /* Source address is 0.0.0.0 - locally generated packet that is * probably not supposed to be masqueraded. */ -#ifdef CONFIG_NF_NAT_NEEDED if (ct->tuplehash[IP_CT_DIR_ORIGINAL].tuple.src.u3.ip == 0) -#else - if (ct->tuplehash[IP_CT_DIR_ORIGINAL].tuple.src.ip == 0) -#endif return NF_ACCEPT; mr = targinfo; @@ -107,40 +96,30 @@ masquerade_target(struct sk_buff **pskb, } write_lock_bh(&masq_lock); -#ifdef CONFIG_NF_NAT_NEEDED nat->masq_index = out->ifindex; -#else - ct->nat.masq_index = out->ifindex; -#endif write_unlock_bh(&masq_lock); /* Transfer from original range. */ - newrange = ((struct ip_nat_range) + newrange = ((struct nf_nat_range) { mr->range[0].flags | IP_NAT_RANGE_MAP_IPS, newsrc, newsrc, mr->range[0].min, mr->range[0].max }); /* Hand modified range to generic setup. 
*/ - return ip_nat_setup_info(ct, &newrange, hooknum); + return nf_nat_setup_info(ct, &newrange, hooknum); } static inline int -device_cmp(struct ip_conntrack *i, void *ifindex) +device_cmp(struct nf_conn *i, void *ifindex) { - int ret; -#ifdef CONFIG_NF_NAT_NEEDED struct nf_conn_nat *nat = nfct_nat(i); + int ret; if (!nat) return 0; -#endif read_lock_bh(&masq_lock); -#ifdef CONFIG_NF_NAT_NEEDED ret = (nat->masq_index == (int)(long)ifindex); -#else - ret = (i->nat.masq_index == (int)(long)ifindex); -#endif read_unlock_bh(&masq_lock); return ret; @@ -156,9 +135,9 @@ static int masq_device_event(struct noti /* Device was downed. Search entire table for conntracks which were associated with that device, and forget them. */ - IP_NF_ASSERT(dev->ifindex != 0); + NF_CT_ASSERT(dev->ifindex != 0); - ip_ct_iterate_cleanup(device_cmp, (void *)(long)dev->ifindex); + nf_ct_iterate_cleanup(device_cmp, (void *)(long)dev->ifindex); } return NOTIFY_DONE; @@ -174,9 +153,9 @@ static int masq_inet_event(struct notifi /* IP address was deleted. Search entire table for conntracks which were associated with that device, and forget them. */ - IP_NF_ASSERT(dev->ifindex != 0); + NF_CT_ASSERT(dev->ifindex != 0); - ip_ct_iterate_cleanup(device_cmp, (void *)(long)dev->ifindex); + nf_ct_iterate_cleanup(device_cmp, (void *)(long)dev->ifindex); } return NOTIFY_DONE; @@ -194,7 +173,7 @@ static struct xt_target masquerade = { .name = "MASQUERADE", .family = AF_INET, .target = masquerade_target, - .targetsize = sizeof(struct ip_nat_multi_range_compat), + .targetsize = sizeof(struct nf_nat_multi_range_compat), .table = "nat", .hooks = 1 << NF_IP_POST_ROUTING, .checkentry = masquerade_check, diff -puN net/ipv4/netfilter/ipt_NETMAP.c~git-net net/ipv4/netfilter/ipt_NETMAP.c --- a/net/ipv4/netfilter/ipt_NETMAP.c~git-net +++ a/net/ipv4/netfilter/ipt_NETMAP.c @@ -16,11 +16,7 @@ #include #include #include -#ifdef CONFIG_NF_NAT_NEEDED #include -#else -#include -#endif #define MODULENAME "NETMAP" MODULE_LICENSE("GPL"); @@ -40,7 +36,7 @@ check(const char *tablename, void *targinfo, unsigned int hook_mask) { - const struct ip_nat_multi_range_compat *mr = targinfo; + const struct nf_nat_multi_range_compat *mr = targinfo; if (!(mr->range[0].flags & IP_NAT_RANGE_MAP_IPS)) { DEBUGP(MODULENAME":check: bad MAP_IPS.\n"); @@ -61,39 +57,39 @@ target(struct sk_buff **pskb, const struct xt_target *target, const void *targinfo) { - struct ip_conntrack *ct; + struct nf_conn *ct; enum ip_conntrack_info ctinfo; __be32 new_ip, netmask; - const struct ip_nat_multi_range_compat *mr = targinfo; - struct ip_nat_range newrange; + const struct nf_nat_multi_range_compat *mr = targinfo; + struct nf_nat_range newrange; - IP_NF_ASSERT(hooknum == NF_IP_PRE_ROUTING + NF_CT_ASSERT(hooknum == NF_IP_PRE_ROUTING || hooknum == NF_IP_POST_ROUTING || hooknum == NF_IP_LOCAL_OUT); - ct = ip_conntrack_get(*pskb, &ctinfo); + ct = nf_ct_get(*pskb, &ctinfo); netmask = ~(mr->range[0].min_ip ^ mr->range[0].max_ip); if (hooknum == NF_IP_PRE_ROUTING || hooknum == NF_IP_LOCAL_OUT) - new_ip = (*pskb)->nh.iph->daddr & ~netmask; + new_ip = ip_hdr(*pskb)->daddr & ~netmask; else - new_ip = (*pskb)->nh.iph->saddr & ~netmask; + new_ip = ip_hdr(*pskb)->saddr & ~netmask; new_ip |= mr->range[0].min_ip & netmask; - newrange = ((struct ip_nat_range) + newrange = ((struct nf_nat_range) { mr->range[0].flags | IP_NAT_RANGE_MAP_IPS, new_ip, new_ip, mr->range[0].min, mr->range[0].max }); /* Hand modified range to generic setup. 
*/ - return ip_nat_setup_info(ct, &newrange, hooknum); + return nf_nat_setup_info(ct, &newrange, hooknum); } static struct xt_target target_module = { .name = MODULENAME, .family = AF_INET, .target = target, - .targetsize = sizeof(struct ip_nat_multi_range_compat), + .targetsize = sizeof(struct nf_nat_multi_range_compat), .table = "nat", .hooks = (1 << NF_IP_PRE_ROUTING) | (1 << NF_IP_POST_ROUTING) | (1 << NF_IP_LOCAL_OUT), diff -puN net/ipv4/netfilter/ipt_REDIRECT.c~git-net net/ipv4/netfilter/ipt_REDIRECT.c --- a/net/ipv4/netfilter/ipt_REDIRECT.c~git-net +++ a/net/ipv4/netfilter/ipt_REDIRECT.c @@ -19,11 +19,7 @@ #include #include #include -#ifdef CONFIG_NF_NAT_NEEDED #include -#else -#include -#endif MODULE_LICENSE("GPL"); MODULE_AUTHOR("Netfilter Core Team "); @@ -43,7 +39,7 @@ redirect_check(const char *tablename, void *targinfo, unsigned int hook_mask) { - const struct ip_nat_multi_range_compat *mr = targinfo; + const struct nf_nat_multi_range_compat *mr = targinfo; if (mr->range[0].flags & IP_NAT_RANGE_MAP_IPS) { DEBUGP("redirect_check: bad MAP_IPS.\n"); @@ -64,17 +60,17 @@ redirect_target(struct sk_buff **pskb, const struct xt_target *target, const void *targinfo) { - struct ip_conntrack *ct; + struct nf_conn *ct; enum ip_conntrack_info ctinfo; __be32 newdst; - const struct ip_nat_multi_range_compat *mr = targinfo; - struct ip_nat_range newrange; + const struct nf_nat_multi_range_compat *mr = targinfo; + struct nf_nat_range newrange; - IP_NF_ASSERT(hooknum == NF_IP_PRE_ROUTING + NF_CT_ASSERT(hooknum == NF_IP_PRE_ROUTING || hooknum == NF_IP_LOCAL_OUT); - ct = ip_conntrack_get(*pskb, &ctinfo); - IP_NF_ASSERT(ct && (ctinfo == IP_CT_NEW || ctinfo == IP_CT_RELATED)); + ct = nf_ct_get(*pskb, &ctinfo); + NF_CT_ASSERT(ct && (ctinfo == IP_CT_NEW || ctinfo == IP_CT_RELATED)); /* Local packets: make them go to loopback */ if (hooknum == NF_IP_LOCAL_OUT) @@ -96,20 +92,20 @@ redirect_target(struct sk_buff **pskb, } /* Transfer from original range. */ - newrange = ((struct ip_nat_range) + newrange = ((struct nf_nat_range) { mr->range[0].flags | IP_NAT_RANGE_MAP_IPS, newdst, newdst, mr->range[0].min, mr->range[0].max }); /* Hand modified range to generic setup. */ - return ip_nat_setup_info(ct, &newrange, hooknum); + return nf_nat_setup_info(ct, &newrange, hooknum); } static struct xt_target redirect_reg = { .name = "REDIRECT", .family = AF_INET, .target = redirect_target, - .targetsize = sizeof(struct ip_nat_multi_range_compat), + .targetsize = sizeof(struct nf_nat_multi_range_compat), .table = "nat", .hooks = (1 << NF_IP_PRE_ROUTING) | (1 << NF_IP_LOCAL_OUT), .checkentry = redirect_check, diff -puN net/ipv4/netfilter/ipt_REJECT.c~git-net net/ipv4/netfilter/ipt_REJECT.c --- a/net/ipv4/netfilter/ipt_REJECT.c~git-net +++ a/net/ipv4/netfilter/ipt_REJECT.c @@ -1,7 +1,5 @@ /* * This is a module which is used for rejecting packets. - * Added support for customized reject packets (Jozsef Kadlecsik). - * Added support for ICMP type-3-code-13 (Maciej Soltysiak). [RFC 1812] */ /* (C) 1999-2001 Paul `Rusty' Russell @@ -43,7 +41,7 @@ MODULE_DESCRIPTION("iptables REJECT targ static void send_reset(struct sk_buff *oldskb, int hook) { struct sk_buff *nskb; - struct iphdr *iph = oldskb->nh.iph; + struct iphdr *niph; struct tcphdr _otcph, *oth, *tcph; __be16 tmp_port; __be32 tmp_addr; @@ -51,10 +49,10 @@ static void send_reset(struct sk_buff *o unsigned int addr_type; /* IP header checks: fragment. 
*/ - if (oldskb->nh.iph->frag_off & htons(IP_OFFSET)) + if (ip_hdr(oldskb)->frag_off & htons(IP_OFFSET)) return; - oth = skb_header_pointer(oldskb, oldskb->nh.iph->ihl * 4, + oth = skb_header_pointer(oldskb, ip_hdrlen(oldskb), sizeof(_otcph), &_otcph); if (oth == NULL) return; @@ -64,7 +62,7 @@ static void send_reset(struct sk_buff *o return; /* Check checksum */ - if (nf_ip_checksum(oldskb, hook, iph->ihl * 4, IPPROTO_TCP)) + if (nf_ip_checksum(oldskb, hook, ip_hdrlen(oldskb), IPPROTO_TCP)) return; /* We need a linear, writeable skb. We also need to expand @@ -84,20 +82,21 @@ static void send_reset(struct sk_buff *o skb_shinfo(nskb)->gso_segs = 0; skb_shinfo(nskb)->gso_type = 0; - tcph = (struct tcphdr *)((u_int32_t*)nskb->nh.iph + nskb->nh.iph->ihl); + tcph = (struct tcphdr *)(skb_network_header(nskb) + ip_hdrlen(nskb)); /* Swap source and dest */ - tmp_addr = nskb->nh.iph->saddr; - nskb->nh.iph->saddr = nskb->nh.iph->daddr; - nskb->nh.iph->daddr = tmp_addr; + niph = ip_hdr(nskb); + tmp_addr = niph->saddr; + niph->saddr = niph->daddr; + niph->daddr = tmp_addr; tmp_port = tcph->source; tcph->source = tcph->dest; tcph->dest = tmp_port; /* Truncate to length (no data) */ tcph->doff = sizeof(struct tcphdr)/4; - skb_trim(nskb, nskb->nh.iph->ihl*4 + sizeof(struct tcphdr)); - nskb->nh.iph->tot_len = htons(nskb->len); + skb_trim(nskb, ip_hdrlen(nskb) + sizeof(struct tcphdr)); + niph->tot_len = htons(nskb->len); if (tcph->ack) { needs_ack = 0; @@ -105,9 +104,9 @@ static void send_reset(struct sk_buff *o tcph->ack_seq = 0; } else { needs_ack = 1; - tcph->ack_seq = htonl(ntohl(oth->seq) + oth->syn + oth->fin - + oldskb->len - oldskb->nh.iph->ihl*4 - - (oth->doff<<2)); + tcph->ack_seq = htonl(ntohl(oth->seq) + oth->syn + oth->fin + + oldskb->len - ip_hdrlen(oldskb) - + (oth->doff << 2)); tcph->seq = 0; } @@ -122,14 +121,13 @@ static void send_reset(struct sk_buff *o /* Adjust TCP checksum */ tcph->check = 0; tcph->check = tcp_v4_check(sizeof(struct tcphdr), - nskb->nh.iph->saddr, - nskb->nh.iph->daddr, + niph->saddr, niph->daddr, csum_partial((char *)tcph, sizeof(struct tcphdr), 0)); /* Set DF, id = 0 */ - nskb->nh.iph->frag_off = htons(IP_DF); - nskb->nh.iph->id = 0; + niph->frag_off = htons(IP_DF); + niph->id = 0; addr_type = RTN_UNSPEC; if (hook != NF_IP_FORWARD @@ -145,12 +143,11 @@ static void send_reset(struct sk_buff *o nskb->ip_summed = CHECKSUM_NONE; /* Adjust IP TTL */ - nskb->nh.iph->ttl = dst_metric(nskb->dst, RTAX_HOPLIMIT); + niph->ttl = dst_metric(nskb->dst, RTAX_HOPLIMIT); /* Adjust IP checksum */ - nskb->nh.iph->check = 0; - nskb->nh.iph->check = ip_fast_csum((unsigned char *)nskb->nh.iph, - nskb->nh.iph->ihl); + niph->check = 0; + niph->check = ip_fast_csum(skb_network_header(nskb), niph->ihl); /* "Never happens" */ if (nskb->len > dst_mtu(nskb->dst)) @@ -182,7 +179,7 @@ static unsigned int reject(struct sk_buf /* Our naive response construction doesn't deal with IP options, and probably shouldn't try. */ - if ((*pskb)->nh.iph->ihl<<2 != sizeof(struct iphdr)) + if (ip_hdrlen(*pskb) != sizeof(struct iphdr)) return NF_DROP; /* WARNING: This code causes reentry within iptables. diff -puN net/ipv4/netfilter/ipt_SAME.c~git-net net/ipv4/netfilter/ipt_SAME.c --- a/net/ipv4/netfilter/ipt_SAME.c~git-net +++ a/net/ipv4/netfilter/ipt_SAME.c @@ -7,21 +7,6 @@ * This program is free software; you can redistribute it and/or modify * it under the terms of the GNU General Public License version 2 as * published by the Free Software Foundation. 
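The MASQUERADE, NETMAP and REDIRECT conversions above all funnel into the same idiom once the old ip_conntrack variants are gone: look up the conntrack entry with nf_ct_get(), build a struct nf_nat_range whose address bounds are pinned to a single computed address, and hand it to nf_nat_setup_info(). A condensed sketch of that idiom; the function name, its reduced argument list and the caller-supplied new_ip are illustrative only:

static unsigned int
example_nat_target(struct sk_buff **pskb, unsigned int hooknum,
                   const struct nf_nat_multi_range_compat *mr, __be32 new_ip)
{
        struct nf_conn *ct;
        enum ip_conntrack_info ctinfo;
        struct nf_nat_range newrange;

        ct = nf_ct_get(*pskb, &ctinfo);
        NF_CT_ASSERT(ct != NULL);

        /* Keep the rule's flags and port range, but map both ends of the
         * address range onto the one address chosen above. */
        newrange = ((struct nf_nat_range)
                { mr->range[0].flags | IP_NAT_RANGE_MAP_IPS,
                  new_ip, new_ip,
                  mr->range[0].min, mr->range[0].max });

        return nf_nat_setup_info(ct, &newrange, hooknum);
}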
- * - * 010320 Martin Josefsson - * * copied ipt_BALANCE.c to ipt_SAME.c and changed a few things. - * 010728 Martin Josefsson - * * added --nodst to not include destination-ip in new source - * calculations. - * * added some more sanity-checks. - * 010729 Martin Josefsson - * * fixed a buggy if-statement in same_check(), should have - * used ntohl() but didn't. - * * added support for multiple ranges. IPT_SAME_MAX_RANGE is - * defined in linux/include/linux/netfilter_ipv4/ipt_SAME.h - * and is currently set to 10. - * * added support for 1-address range, nice to have now that - * we have multiple ranges. */ #include #include @@ -35,11 +20,7 @@ #include #include #include -#ifdef CONFIG_NF_NAT_NEEDED #include -#else -#include -#endif #include MODULE_LICENSE("GPL"); @@ -138,17 +119,17 @@ same_target(struct sk_buff **pskb, const struct xt_target *target, const void *targinfo) { - struct ip_conntrack *ct; + struct nf_conn *ct; enum ip_conntrack_info ctinfo; u_int32_t tmpip, aindex; __be32 new_ip; const struct ipt_same_info *same = targinfo; - struct ip_nat_range newrange; - const struct ip_conntrack_tuple *t; + struct nf_nat_range newrange; + const struct nf_conntrack_tuple *t; - IP_NF_ASSERT(hooknum == NF_IP_PRE_ROUTING || + NF_CT_ASSERT(hooknum == NF_IP_PRE_ROUTING || hooknum == NF_IP_POST_ROUTING); - ct = ip_conntrack_get(*pskb, &ctinfo); + ct = nf_ct_get(*pskb, &ctinfo); t = &ct->tuplehash[IP_CT_DIR_ORIGINAL].tuple; @@ -157,17 +138,10 @@ same_target(struct sk_buff **pskb, Here we calculate the index in same->iparray which holds the ipaddress we should use */ -#ifdef CONFIG_NF_NAT_NEEDED tmpip = ntohl(t->src.u3.ip); if (!(same->info & IPT_SAME_NODST)) tmpip += ntohl(t->dst.u3.ip); -#else - tmpip = ntohl(t->src.ip); - - if (!(same->info & IPT_SAME_NODST)) - tmpip += ntohl(t->dst.ip); -#endif aindex = tmpip % same->ipnum; new_ip = htonl(same->iparray[aindex]); @@ -178,13 +152,13 @@ same_target(struct sk_buff **pskb, NIPQUAD(new_ip)); /* Transfer from original range. */ - newrange = ((struct ip_nat_range) + newrange = ((struct nf_nat_range) { same->range[0].flags, new_ip, new_ip, /* FIXME: Use ports from correct range! */ same->range[0].min, same->range[0].max }); /* Hand modified range to generic setup. 
*/ - return ip_nat_setup_info(ct, &newrange, hooknum); + return nf_nat_setup_info(ct, &newrange, hooknum); } static struct xt_target same_reg = { diff -puN net/ipv4/netfilter/ipt_TOS.c~git-net net/ipv4/netfilter/ipt_TOS.c --- a/net/ipv4/netfilter/ipt_TOS.c~git-net +++ a/net/ipv4/netfilter/ipt_TOS.c @@ -29,13 +29,13 @@ target(struct sk_buff **pskb, const void *targinfo) { const struct ipt_tos_target_info *tosinfo = targinfo; - struct iphdr *iph = (*pskb)->nh.iph; + struct iphdr *iph = ip_hdr(*pskb); if ((iph->tos & IPTOS_TOS_MASK) != tosinfo->tos) { __u8 oldtos; if (!skb_make_writable(pskb, sizeof(struct iphdr))) return NF_DROP; - iph = (*pskb)->nh.iph; + iph = ip_hdr(*pskb); oldtos = iph->tos; iph->tos = (iph->tos & IPTOS_PREC_MASK) | tosinfo->tos; nf_csum_replace2(&iph->check, htons(oldtos), htons(iph->tos)); diff -puN net/ipv4/netfilter/ipt_TTL.c~git-net net/ipv4/netfilter/ipt_TTL.c --- a/net/ipv4/netfilter/ipt_TTL.c~git-net +++ a/net/ipv4/netfilter/ipt_TTL.c @@ -32,7 +32,7 @@ ipt_ttl_target(struct sk_buff **pskb, if (!skb_make_writable(pskb, (*pskb)->len)) return NF_DROP; - iph = (*pskb)->nh.iph; + iph = ip_hdr(*pskb); switch (info->mode) { case IPT_TTL_SET: diff -puN net/ipv4/netfilter/ipt_ULOG.c~git-net net/ipv4/netfilter/ipt_ULOG.c --- a/net/ipv4/netfilter/ipt_ULOG.c~git-net +++ a/net/ipv4/netfilter/ipt_ULOG.c @@ -2,20 +2,6 @@ * netfilter module for userspace packet logging daemons * * (C) 2000-2004 by Harald Welte - * - * 2000/09/22 ulog-cprange feature added - * 2001/01/04 in-kernel queue as proposed by Sebastian Zander - * - * 2001/01/30 per-rule nlgroup conflicts with global queue. - * nlgroup now global (sysctl) - * 2001/04/19 ulog-queue reworked, now fixed buffer size specified at - * module loadtime -HW - * 2002/07/07 remove broken nflog_rcv() function -HW - * 2002/08/29 fix shifted/unshifted nlgroup bug -HW - * 2002/10/30 fix uninitialized mac_len field - - * 2004/10/25 fix erroneous calculation of 'len' parameter to NLMSG_PUT - * resulting in bogus 'error during NLMSG_PUT' messages. - * * (C) 1999-2001 Paul `Rusty' Russell * (C) 2002-2004 Netfilter Core Team * @@ -42,8 +28,6 @@ * flushtimeout: * Specify, after how many hundredths of a second the queue should be * flushed even if it is not full yet. - * - * ipt_ULOG.c,v 1.22 2002/10/30 09:07:31 laforge Exp */ #include @@ -187,6 +171,7 @@ static void ipt_ulog_packet(unsigned int ulog_packet_msg_t *pm; size_t size, copy_len; struct nlmsghdr *nlh; + struct timeval tv; /* ffs == find first bit set, necessary because userspace * is already shifting groupnumber, but we need unshifted. @@ -232,13 +217,14 @@ static void ipt_ulog_packet(unsigned int pm = NLMSG_DATA(nlh); /* We might not have a timestamp, get one */ - if (skb->tstamp.off_sec == 0) + if (skb->tstamp.tv64 == 0) __net_timestamp((struct sk_buff *)skb); /* copy hook, prefix, timestamp, payload, etc. 
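The ipt_ULOG timestamp hunk above follows the sk_buff change from the old off_sec/off_usec pair to a ktime_t: emptiness is now tested with tstamp.tv64 == 0 and the value is converted with ktime_to_timeval() before being copied into the userspace message. Roughly, for the post-conversion kernel (example_stamp is an invented helper name):

#include <linux/ktime.h>
#include <linux/skbuff.h>

static void example_stamp(struct sk_buff *skb, __u32 *sec, __u32 *usec)
{
        struct timeval tv;

        /* Stamp the packet now if nothing has stamped it yet. */
        if (skb->tstamp.tv64 == 0)
                __net_timestamp(skb);

        tv = ktime_to_timeval(skb->tstamp);
        *sec = tv.tv_sec;
        *usec = tv.tv_usec;
}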
*/ pm->data_len = copy_len; - put_unaligned(skb->tstamp.off_sec, &pm->timestamp_sec); - put_unaligned(skb->tstamp.off_usec, &pm->timestamp_usec); + tv = ktime_to_timeval(skb->tstamp); + put_unaligned(tv.tv_sec, &pm->timestamp_sec); + put_unaligned(tv.tv_usec, &pm->timestamp_usec); put_unaligned(skb->mark, &pm->mark); pm->hook = hooknum; if (prefix != NULL) @@ -249,9 +235,9 @@ static void ipt_ulog_packet(unsigned int *(pm->prefix) = '\0'; if (in && in->hard_header_len > 0 - && skb->mac.raw != (void *) skb->nh.iph + && skb->mac_header != skb->network_header && in->hard_header_len <= ULOG_MAC_LEN) { - memcpy(pm->mac, skb->mac.raw, in->hard_header_len); + memcpy(pm->mac, skb_mac_header(skb), in->hard_header_len); pm->mac_len = in->hard_header_len; } else pm->mac_len = 0; @@ -363,12 +349,52 @@ static int ipt_ulog_checkentry(const cha return 1; } +#ifdef CONFIG_COMPAT +struct compat_ipt_ulog_info { + compat_uint_t nl_group; + compat_size_t copy_range; + compat_size_t qthreshold; + char prefix[ULOG_PREFIX_LEN]; +}; + +static void compat_from_user(void *dst, void *src) +{ + struct compat_ipt_ulog_info *cl = src; + struct ipt_ulog_info l = { + .nl_group = cl->nl_group, + .copy_range = cl->copy_range, + .qthreshold = cl->qthreshold, + }; + + memcpy(l.prefix, cl->prefix, sizeof(l.prefix)); + memcpy(dst, &l, sizeof(l)); +} + +static int compat_to_user(void __user *dst, void *src) +{ + struct ipt_ulog_info *l = src; + struct compat_ipt_ulog_info cl = { + .nl_group = l->nl_group, + .copy_range = l->copy_range, + .qthreshold = l->qthreshold, + }; + + memcpy(cl.prefix, l->prefix, sizeof(cl.prefix)); + return copy_to_user(dst, &cl, sizeof(cl)) ? -EFAULT : 0; +} +#endif /* CONFIG_COMPAT */ + static struct xt_target ipt_ulog_reg = { .name = "ULOG", .family = AF_INET, .target = ipt_ulog_target, .targetsize = sizeof(struct ipt_ulog_info), .checkentry = ipt_ulog_checkentry, +#ifdef CONFIG_COMPAT + .compatsize = sizeof(struct compat_ipt_ulog_info), + .compat_from_user = compat_from_user, + .compat_to_user = compat_to_user, +#endif .me = THIS_MODULE, }; @@ -390,14 +416,11 @@ static int __init ipt_ulog_init(void) } /* initialize ulog_buffers */ - for (i = 0; i < ULOG_MAXNLGROUPS; i++) { - init_timer(&ulog_buffers[i].timer); - ulog_buffers[i].timer.function = ulog_timer; - ulog_buffers[i].timer.data = i; - } + for (i = 0; i < ULOG_MAXNLGROUPS; i++) + setup_timer(&ulog_buffers[i].timer, ulog_timer, i); nflognl = netlink_kernel_create(NETLINK_NFLOG, ULOG_MAXNLGROUPS, NULL, - THIS_MODULE); + NULL, THIS_MODULE); if (!nflognl) return -ENOMEM; diff -puN net/ipv4/netfilter/ipt_addrtype.c~git-net net/ipv4/netfilter/ipt_addrtype.c --- a/net/ipv4/netfilter/ipt_addrtype.c~git-net +++ a/net/ipv4/netfilter/ipt_addrtype.c @@ -33,7 +33,7 @@ static int match(const struct sk_buff *s int offset, unsigned int protoff, int *hotdrop) { const struct ipt_addrtype_info *info = matchinfo; - const struct iphdr *iph = skb->nh.iph; + const struct iphdr *iph = ip_hdr(skb); int ret = 1; if (info->source) diff -puN net/ipv4/netfilter/ipt_ecn.c~git-net net/ipv4/netfilter/ipt_ecn.c --- a/net/ipv4/netfilter/ipt_ecn.c~git-net +++ a/net/ipv4/netfilter/ipt_ecn.c @@ -1,7 +1,5 @@ /* IP tables module for matching the value of the IPv4 and TCP ECN bits * - * ipt_ecn.c,v 1.3 2002/05/29 15:09:00 laforge Exp - * * (C) 2002 by Harald Welte * * This program is free software; you can redistribute it and/or modify @@ -11,6 +9,7 @@ #include #include +#include #include #include #include @@ -26,7 +25,7 @@ MODULE_LICENSE("GPL"); static inline int match_ip(const 
struct sk_buff *skb, const struct ipt_ecn_info *einfo) { - return ((skb->nh.iph->tos&IPT_ECN_IP_MASK) == einfo->ip_ect); + return (ip_hdr(skb)->tos & IPT_ECN_IP_MASK) == einfo->ip_ect; } static inline int match_tcp(const struct sk_buff *skb, @@ -38,8 +37,7 @@ static inline int match_tcp(const struct /* In practice, TCP match does this, so can't fail. But let's * be good citizens. */ - th = skb_header_pointer(skb, skb->nh.iph->ihl * 4, - sizeof(_tcph), &_tcph); + th = skb_header_pointer(skb, ip_hdrlen(skb), sizeof(_tcph), &_tcph); if (th == NULL) { *hotdrop = 0; return 0; @@ -80,7 +78,7 @@ static int match(const struct sk_buff *s return 0; if (info->operation & (IPT_ECN_OP_MATCH_ECE|IPT_ECN_OP_MATCH_CWR)) { - if (skb->nh.iph->protocol != IPPROTO_TCP) + if (ip_hdr(skb)->protocol != IPPROTO_TCP) return 0; if (!match_tcp(skb, info, hotdrop)) return 0; diff -puN net/ipv4/netfilter/ipt_iprange.c~git-net net/ipv4/netfilter/ipt_iprange.c --- a/net/ipv4/netfilter/ipt_iprange.c~git-net +++ a/net/ipv4/netfilter/ipt_iprange.c @@ -32,7 +32,7 @@ match(const struct sk_buff *skb, int offset, unsigned int protoff, int *hotdrop) { const struct ipt_iprange_info *info = matchinfo; - const struct iphdr *iph = skb->nh.iph; + const struct iphdr *iph = ip_hdr(skb); if (info->flags & IPRANGE_SRC) { if (((ntohl(iph->saddr) < ntohl(info->src.min_ip)) diff -puN net/ipv4/netfilter/ipt_recent.c~git-net net/ipv4/netfilter/ipt_recent.c --- a/net/ipv4/netfilter/ipt_recent.c~git-net +++ a/net/ipv4/netfilter/ipt_recent.c @@ -183,11 +183,11 @@ ipt_recent_match(const struct sk_buff *s int ret = info->invert; if (info->side == IPT_RECENT_DEST) - addr = skb->nh.iph->daddr; + addr = ip_hdr(skb)->daddr; else - addr = skb->nh.iph->saddr; + addr = ip_hdr(skb)->saddr; - ttl = skb->nh.iph->ttl; + ttl = ip_hdr(skb)->ttl; /* use TTL as seen before forwarding */ if (out && !skb->sk) ttl++; diff -puN net/ipv4/netfilter/ipt_tos.c~git-net net/ipv4/netfilter/ipt_tos.c --- a/net/ipv4/netfilter/ipt_tos.c~git-net +++ a/net/ipv4/netfilter/ipt_tos.c @@ -30,7 +30,7 @@ match(const struct sk_buff *skb, { const struct ipt_tos_info *info = matchinfo; - return (skb->nh.iph->tos == info->tos) ^ info->invert; + return (ip_hdr(skb)->tos == info->tos) ^ info->invert; } static struct xt_match tos_match = { diff -puN net/ipv4/netfilter/ipt_ttl.c~git-net net/ipv4/netfilter/ipt_ttl.c --- a/net/ipv4/netfilter/ipt_ttl.c~git-net +++ a/net/ipv4/netfilter/ipt_ttl.c @@ -1,7 +1,5 @@ /* IP tables module for matching the value of the TTL * - * ipt_ttl.c,v 1.5 2000/11/13 11:16:08 laforge Exp - * * (C) 2000,2001 by Harald Welte * * This program is free software; you can redistribute it and/or modify @@ -26,19 +24,20 @@ static int match(const struct sk_buff *s int offset, unsigned int protoff, int *hotdrop) { const struct ipt_ttl_info *info = matchinfo; + const u8 ttl = ip_hdr(skb)->ttl; switch (info->mode) { case IPT_TTL_EQ: - return (skb->nh.iph->ttl == info->ttl); + return (ttl == info->ttl); break; case IPT_TTL_NE: - return (!(skb->nh.iph->ttl == info->ttl)); + return (!(ttl == info->ttl)); break; case IPT_TTL_LT: - return (skb->nh.iph->ttl < info->ttl); + return (ttl < info->ttl); break; case IPT_TTL_GT: - return (skb->nh.iph->ttl > info->ttl); + return (ttl > info->ttl); break; default: printk(KERN_WARNING "ipt_ttl: unknown mode %d\n", diff -puN net/ipv4/netfilter/iptable_filter.c~git-net net/ipv4/netfilter/iptable_filter.c --- a/net/ipv4/netfilter/iptable_filter.c~git-net +++ a/net/ipv4/netfilter/iptable_filter.c @@ -13,6 +13,7 @@ #include #include #include 
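Every one of the small match conversions above (ecn, iprange, recent, tos, ttl) follows the same recipe: read IPv4 header fields through ip_hdr(skb) rather than the removed skb->nh.iph, and locate the transport header with ip_hdrlen(skb) rather than an open-coded ihl * 4. A representative match body written against that recipe; example_match and the particular tests are illustrative and not lifted from any single module:

#include <linux/in.h>
#include <linux/ip.h>
#include <linux/tcp.h>
#include <linux/skbuff.h>

static int example_match(const struct sk_buff *skb, u_int8_t wanted_tos)
{
        const struct iphdr *iph = ip_hdr(skb);  /* points at skb_network_header(skb) */
        struct tcphdr _tcph, *th;

        if (iph->tos != wanted_tos)
                return 0;

        if (iph->protocol != IPPROTO_TCP)
                return 1;

        /* ip_hdrlen(skb) is iph->ihl * 4; skb_header_pointer() copies into
         * the stack buffer if the header is not linear in the skb. */
        th = skb_header_pointer(skb, ip_hdrlen(skb), sizeof(_tcph), &_tcph);
        if (th == NULL)
                return 0;

        return !th->syn;        /* e.g. only match non-SYN segments */
}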
+#include MODULE_LICENSE("GPL"); MODULE_AUTHOR("Netfilter Core Team "); @@ -102,7 +103,7 @@ ipt_local_out_hook(unsigned int hook, { /* root is playing with raw sockets. */ if ((*pskb)->len < sizeof(struct iphdr) - || (*pskb)->nh.iph->ihl * 4 < sizeof(struct iphdr)) { + || ip_hdrlen(*pskb) < sizeof(struct iphdr)) { if (net_ratelimit()) printk("ipt_hook: happy cracking.\n"); return NF_ACCEPT; diff -puN net/ipv4/netfilter/iptable_mangle.c~git-net net/ipv4/netfilter/iptable_mangle.c --- a/net/ipv4/netfilter/iptable_mangle.c~git-net +++ a/net/ipv4/netfilter/iptable_mangle.c @@ -7,8 +7,6 @@ * This program is free software; you can redistribute it and/or modify * it under the terms of the GNU General Public License version 2 as * published by the Free Software Foundation. - * - * Extended to all five netfilter hooks by Brad Chapman & Harald Welte */ #include #include @@ -17,6 +15,7 @@ #include #include #include +#include MODULE_LICENSE("GPL"); MODULE_AUTHOR("Netfilter Core Team "); @@ -130,13 +129,14 @@ ipt_local_hook(unsigned int hook, int (*okfn)(struct sk_buff *)) { unsigned int ret; + const struct iphdr *iph; u_int8_t tos; __be32 saddr, daddr; u_int32_t mark; /* root is playing with raw sockets. */ if ((*pskb)->len < sizeof(struct iphdr) - || (*pskb)->nh.iph->ihl * 4 < sizeof(struct iphdr)) { + || ip_hdrlen(*pskb) < sizeof(struct iphdr)) { if (net_ratelimit()) printk("ipt_hook: happy cracking.\n"); return NF_ACCEPT; @@ -144,19 +144,23 @@ ipt_local_hook(unsigned int hook, /* Save things which could affect route */ mark = (*pskb)->mark; - saddr = (*pskb)->nh.iph->saddr; - daddr = (*pskb)->nh.iph->daddr; - tos = (*pskb)->nh.iph->tos; + iph = ip_hdr(*pskb); + saddr = iph->saddr; + daddr = iph->daddr; + tos = iph->tos; ret = ipt_do_table(pskb, hook, in, out, &packet_mangler); /* Reroute for ANY change. */ - if (ret != NF_DROP && ret != NF_STOLEN && ret != NF_QUEUE - && ((*pskb)->nh.iph->saddr != saddr - || (*pskb)->nh.iph->daddr != daddr - || (*pskb)->mark != mark - || (*pskb)->nh.iph->tos != tos)) - if (ip_route_me_harder(pskb, RTN_UNSPEC)) - ret = NF_DROP; + if (ret != NF_DROP && ret != NF_STOLEN && ret != NF_QUEUE) { + iph = ip_hdr(*pskb); + + if (iph->saddr != saddr || + iph->daddr != daddr || + (*pskb)->mark != mark || + iph->tos != tos) + if (ip_route_me_harder(pskb, RTN_UNSPEC)) + ret = NF_DROP; + } return ret; } diff -puN net/ipv4/netfilter/nf_conntrack_l3proto_ipv4.c~git-net net/ipv4/netfilter/nf_conntrack_l3proto_ipv4.c --- a/net/ipv4/netfilter/nf_conntrack_l3proto_ipv4.c~git-net +++ a/net/ipv4/netfilter/nf_conntrack_l3proto_ipv4.c @@ -4,14 +4,6 @@ * This program is free software; you can redistribute it and/or modify * it under the terms of the GNU General Public License version 2 as * published by the Free Software Foundation. - * - * 16 Dec 2003: Yasuyuki Kozakai @USAGI - * - move L3 protocol dependent part to this file. - * 23 Mar 2004: Yasuyuki Kozakai @USAGI - * - add get_features() to support various size of conntrack - * structures. 
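The iptable_mangle change above restructures ipt_local_hook(): snapshot the routing-relevant fields (mark, saddr, daddr, tos), run the table, then re-read the header through ip_hdr() and reroute only when a verdict other than DROP/STOLEN/QUEUE left one of those fields changed. In outline, mirroring the hunk rather than adding anything new (packet_mangler is iptable_mangle's own table; the raw-socket sanity check from the hunk is omitted):

static unsigned int
example_local_hook(unsigned int hook, struct sk_buff **pskb,
                   const struct net_device *in, const struct net_device *out,
                   int (*okfn)(struct sk_buff *))
{
        const struct iphdr *iph = ip_hdr(*pskb);
        __be32 saddr = iph->saddr, daddr = iph->daddr;
        u_int8_t tos = iph->tos;
        u_int32_t mark = (*pskb)->mark;
        unsigned int ret;

        ret = ipt_do_table(pskb, hook, in, out, &packet_mangler);

        if (ret != NF_DROP && ret != NF_STOLEN && ret != NF_QUEUE) {
                /* The table may have reallocated the skb; reload the header
                 * before comparing. */
                iph = ip_hdr(*pskb);
                if (iph->saddr != saddr || iph->daddr != daddr ||
                    (*pskb)->mark != mark || iph->tos != tos)
                        if (ip_route_me_harder(pskb, RTN_UNSPEC))
                                ret = NF_DROP;
        }
        return ret;
}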
- * - * Derived from net/ipv4/netfilter/ip_conntrack_standalone.c */ #include @@ -87,7 +79,7 @@ nf_ct_ipv4_gather_frags(struct sk_buff * local_bh_enable(); if (skb) - ip_send_check(skb->nh.iph); + ip_send_check(ip_hdr(skb)); return skb; } @@ -97,16 +89,16 @@ ipv4_prepare(struct sk_buff **pskb, unsi u_int8_t *protonum) { /* Never happen */ - if ((*pskb)->nh.iph->frag_off & htons(IP_OFFSET)) { + if (ip_hdr(*pskb)->frag_off & htons(IP_OFFSET)) { if (net_ratelimit()) { printk(KERN_ERR "ipv4_prepare: Frag of proto %u (hook=%u)\n", - (*pskb)->nh.iph->protocol, hooknum); + ip_hdr(*pskb)->protocol, hooknum); } return -NF_DROP; } - *dataoff = (*pskb)->nh.raw - (*pskb)->data + (*pskb)->nh.iph->ihl*4; - *protonum = (*pskb)->nh.iph->protocol; + *dataoff = skb_network_offset(*pskb) + ip_hdrlen(*pskb); + *protonum = ip_hdr(*pskb)->protocol; return NF_ACCEPT; } @@ -152,9 +144,8 @@ static unsigned int ipv4_conntrack_help( return NF_ACCEPT; return help->helper->help(pskb, - (*pskb)->nh.raw - (*pskb)->data - + (*pskb)->nh.iph->ihl*4, - ct, ctinfo); + skb_network_offset(*pskb) + ip_hdrlen(*pskb), + ct, ctinfo); } static unsigned int ipv4_conntrack_defrag(unsigned int hooknum, @@ -171,7 +162,7 @@ static unsigned int ipv4_conntrack_defra #endif /* Gather fragments. */ - if ((*pskb)->nh.iph->frag_off & htons(IP_MF|IP_OFFSET)) { + if (ip_hdr(*pskb)->frag_off & htons(IP_MF | IP_OFFSET)) { *pskb = nf_ct_ipv4_gather_frags(*pskb, hooknum == NF_IP_PRE_ROUTING ? IP_DEFRAG_CONNTRACK_IN : @@ -199,7 +190,7 @@ static unsigned int ipv4_conntrack_local { /* root is playing with raw sockets. */ if ((*pskb)->len < sizeof(struct iphdr) - || (*pskb)->nh.iph->ihl * 4 < sizeof(struct iphdr)) { + || ip_hdrlen(*pskb) < sizeof(struct iphdr)) { if (net_ratelimit()) printk("ipt_hook: happy cracking.\n"); return NF_ACCEPT; diff -puN net/ipv4/netfilter/nf_conntrack_proto_icmp.c~git-net net/ipv4/netfilter/nf_conntrack_proto_icmp.c --- a/net/ipv4/netfilter/nf_conntrack_proto_icmp.c~git-net +++ a/net/ipv4/netfilter/nf_conntrack_proto_icmp.c @@ -4,11 +4,6 @@ * This program is free software; you can redistribute it and/or modify * it under the terms of the GNU General Public License version 2 as * published by the Free Software Foundation. - * - * 16 Dec 2003: Yasuyuki Kozakai @USAGI - * - enable working with Layer 3 protocol independent connection tracking. - * - * Derived from net/ipv4/netfilter/ip_conntrack_proto_icmp.c */ #include @@ -158,7 +153,7 @@ icmp_error_message(struct sk_buff *skb, NF_CT_ASSERT(skb->nfct == NULL); /* Not enough header? */ - inside = skb_header_pointer(skb, skb->nh.iph->ihl*4, sizeof(_in), &_in); + inside = skb_header_pointer(skb, ip_hdrlen(skb), sizeof(_in), &_in); if (inside == NULL) return -NF_ACCEPT; @@ -172,7 +167,7 @@ icmp_error_message(struct sk_buff *skb, /* rcu_read_lock()ed by nf_hook_slow */ innerproto = __nf_ct_l4proto_find(PF_INET, inside->ip.protocol); - dataoff = skb->nh.iph->ihl*4 + sizeof(inside->icmp); + dataoff = ip_hdrlen(skb) + sizeof(inside->icmp); /* Are they talking about one of our connections? */ if (!nf_ct_get_tuple(skb, dataoff, dataoff + inside->ip.ihl*4, PF_INET, inside->ip.protocol, &origtuple, @@ -227,7 +222,7 @@ icmp_error(struct sk_buff *skb, unsigned struct icmphdr _ih, *icmph; /* Not enough header? 
*/ - icmph = skb_header_pointer(skb, skb->nh.iph->ihl*4, sizeof(_ih), &_ih); + icmph = skb_header_pointer(skb, ip_hdrlen(skb), sizeof(_ih), &_ih); if (icmph == NULL) { if (LOG_INVALID(IPPROTO_ICMP)) nf_log_packet(PF_INET, 0, skb, NULL, NULL, NULL, diff -puN net/ipv4/netfilter/nf_nat_core.c~git-net net/ipv4/netfilter/nf_nat_core.c --- a/net/ipv4/netfilter/nf_nat_core.c~git-net +++ a/net/ipv4/netfilter/nf_nat_core.c @@ -431,7 +431,7 @@ int nf_nat_icmp_reply_translation(struct } *inside; struct nf_conntrack_l4proto *l4proto; struct nf_conntrack_tuple inner, target; - int hdrlen = (*pskb)->nh.iph->ihl * 4; + int hdrlen = ip_hdrlen(*pskb); enum ip_conntrack_dir dir = CTINFO2DIR(ctinfo); unsigned long statusbit; enum nf_nat_manip_type manip = HOOK2MANIP(hooknum); @@ -439,7 +439,7 @@ int nf_nat_icmp_reply_translation(struct if (!skb_make_writable(pskb, hdrlen + sizeof(*inside))) return 0; - inside = (void *)(*pskb)->data + (*pskb)->nh.iph->ihl*4; + inside = (void *)(*pskb)->data + ip_hdrlen(*pskb); /* We're actually going to mangle it beyond trivial checksum adjustment, so make sure the current checksum is correct. */ @@ -469,9 +469,9 @@ int nf_nat_icmp_reply_translation(struct l4proto = __nf_ct_l4proto_find(PF_INET, inside->ip.protocol); if (!nf_ct_get_tuple(*pskb, - (*pskb)->nh.iph->ihl*4 + sizeof(struct icmphdr), - (*pskb)->nh.iph->ihl*4 + - sizeof(struct icmphdr) + inside->ip.ihl*4, + ip_hdrlen(*pskb) + sizeof(struct icmphdr), + (ip_hdrlen(*pskb) + + sizeof(struct icmphdr) + inside->ip.ihl * 4), (u_int16_t)AF_INET, inside->ip.protocol, &inner, l3proto, l4proto)) @@ -483,14 +483,14 @@ int nf_nat_icmp_reply_translation(struct packet: PREROUTING (DST manip), routing produces ICMP, goes through POSTROUTING (which must correct the DST manip). */ if (!manip_pkt(inside->ip.protocol, pskb, - (*pskb)->nh.iph->ihl*4 + sizeof(inside->icmp), + ip_hdrlen(*pskb) + sizeof(inside->icmp), &ct->tuplehash[!dir].tuple, !manip)) return 0; if ((*pskb)->ip_summed != CHECKSUM_PARTIAL) { /* Reloading "inside" here since manip_pkt inner. 
*/ - inside = (void *)(*pskb)->data + (*pskb)->nh.iph->ihl*4; + inside = (void *)(*pskb)->data + ip_hdrlen(*pskb); inside->icmp.checksum = 0; inside->icmp.checksum = csum_fold(skb_checksum(*pskb, hdrlen, diff -puN net/ipv4/netfilter/nf_nat_h323.c~git-net net/ipv4/netfilter/nf_nat_h323.c --- a/net/ipv4/netfilter/nf_nat_h323.c~git-net +++ a/net/ipv4/netfilter/nf_nat_h323.c @@ -33,7 +33,7 @@ static int set_addr(struct sk_buff **psk unsigned int addroff, __be32 ip, __be16 port) { enum ip_conntrack_info ctinfo; - struct nf_conn *ct = ip_conntrack_get(*pskb, &ctinfo); + struct nf_conn *ct = nf_ct_get(*pskb, &ctinfo); struct { __be32 ip; __be16 port; @@ -44,7 +44,7 @@ static int set_addr(struct sk_buff **psk buf.port = port; addroff += dataoff; - if ((*pskb)->nh.iph->protocol == IPPROTO_TCP) { + if (ip_hdr(*pskb)->protocol == IPPROTO_TCP) { if (!nf_nat_mangle_tcp_packet(pskb, ct, ctinfo, addroff, sizeof(buf), (char *) &buf, sizeof(buf))) { @@ -55,11 +55,11 @@ static int set_addr(struct sk_buff **psk } /* Relocate data pointer */ - th = skb_header_pointer(*pskb, (*pskb)->nh.iph->ihl * 4, + th = skb_header_pointer(*pskb, ip_hdrlen(*pskb), sizeof(_tcph), &_tcph); if (th == NULL) return -1; - *data = (*pskb)->data + (*pskb)->nh.iph->ihl * 4 + + *data = (*pskb)->data + ip_hdrlen(*pskb) + th->doff * 4 + dataoff; } else { if (!nf_nat_mangle_udp_packet(pskb, ct, ctinfo, @@ -73,8 +73,8 @@ static int set_addr(struct sk_buff **psk /* nf_nat_mangle_udp_packet uses skb_make_writable() to copy * or pull everything in a linear buffer, so we can safely * use the skb pointers now */ - *data = (*pskb)->data + (*pskb)->nh.iph->ihl * 4 + - sizeof(struct udphdr); + *data = ((*pskb)->data + ip_hdrlen(*pskb) + + sizeof(struct udphdr)); } return 0; @@ -383,7 +383,7 @@ static int nat_h245(struct sk_buff **psk static void ip_nat_q931_expect(struct nf_conn *new, struct nf_conntrack_expect *this) { - struct ip_nat_range range; + struct nf_nat_range range; if (this->tuple.src.u3.ip != 0) { /* Only accept calls from GK */ nf_nat_follow_master(new, this); diff -puN net/ipv4/netfilter/nf_nat_helper.c~git-net net/ipv4/netfilter/nf_nat_helper.c --- a/net/ipv4/netfilter/nf_nat_helper.c~git-net +++ a/net/ipv4/netfilter/nf_nat_helper.c @@ -87,12 +87,13 @@ static void mangle_contents(struct sk_bu unsigned char *data; BUG_ON(skb_is_nonlinear(skb)); - data = (unsigned char *)skb->nh.iph + dataoff; + data = skb_network_header(skb) + dataoff; /* move post-replacement */ memmove(data + match_offset + rep_len, data + match_offset + match_len, - skb->tail - (data + match_offset + match_len)); + skb->tail - (skb->network_header + dataoff + + match_offset + match_len)); /* insert data from buffer */ memcpy(data + match_offset, rep_buffer, rep_len); @@ -111,8 +112,8 @@ static void mangle_contents(struct sk_bu } /* fix IP hdr checksum information */ - skb->nh.iph->tot_len = htons(skb->len); - ip_send_check(skb->nh.iph); + ip_hdr(skb)->tot_len = htons(skb->len); + ip_send_check(ip_hdr(skb)); } /* Unusual, but possible case. 
*/ @@ -152,6 +153,7 @@ nf_nat_mangle_tcp_packet(struct sk_buff const char *rep_buffer, unsigned int rep_len) { + struct rtable *rt = (struct rtable *)(*pskb)->dst; struct iphdr *iph; struct tcphdr *tcph; int oldlen, datalen; @@ -166,7 +168,7 @@ nf_nat_mangle_tcp_packet(struct sk_buff SKB_LINEAR_ASSERT(*pskb); - iph = (*pskb)->nh.iph; + iph = ip_hdr(*pskb); tcph = (void *)iph + iph->ihl*4; oldlen = (*pskb)->len - iph->ihl*4; @@ -175,11 +177,22 @@ nf_nat_mangle_tcp_packet(struct sk_buff datalen = (*pskb)->len - iph->ihl*4; if ((*pskb)->ip_summed != CHECKSUM_PARTIAL) { - tcph->check = 0; - tcph->check = tcp_v4_check(datalen, - iph->saddr, iph->daddr, - csum_partial((char *)tcph, - datalen, 0)); + if (!(rt->rt_flags & RTCF_LOCAL) && + (*pskb)->dev->features & NETIF_F_ALL_CSUM) { + (*pskb)->ip_summed = CHECKSUM_PARTIAL; + (*pskb)->csum_start = skb_headroom(*pskb) + + skb_network_offset(*pskb) + + iph->ihl * 4; + (*pskb)->csum_offset = offsetof(struct tcphdr, check); + tcph->check = ~tcp_v4_check(datalen, + iph->saddr, iph->daddr, 0); + } else { + tcph->check = 0; + tcph->check = tcp_v4_check(datalen, + iph->saddr, iph->daddr, + csum_partial((char *)tcph, + datalen, 0)); + } } else nf_proto_csum_replace2(&tcph->check, *pskb, htons(oldlen), htons(datalen), 1); @@ -190,7 +203,7 @@ nf_nat_mangle_tcp_packet(struct sk_buff (int)rep_len - (int)match_len, ct, ctinfo); /* Tell TCP window tracking about seq change */ - nf_conntrack_tcp_update(*pskb, (*pskb)->nh.iph->ihl*4, + nf_conntrack_tcp_update(*pskb, ip_hdrlen(*pskb), ct, CTINFO2DIR(ctinfo)); } return 1; @@ -216,12 +229,13 @@ nf_nat_mangle_udp_packet(struct sk_buff const char *rep_buffer, unsigned int rep_len) { + struct rtable *rt = (struct rtable *)(*pskb)->dst; struct iphdr *iph; struct udphdr *udph; int datalen, oldlen; /* UDP helpers might accidentally mangle the wrong packet */ - iph = (*pskb)->nh.iph; + iph = ip_hdr(*pskb); if ((*pskb)->len < iph->ihl*4 + sizeof(*udph) + match_offset + match_len) return 0; @@ -234,7 +248,7 @@ nf_nat_mangle_udp_packet(struct sk_buff !enlarge_skb(pskb, rep_len - match_len)) return 0; - iph = (*pskb)->nh.iph; + iph = ip_hdr(*pskb); udph = (void *)iph + iph->ihl*4; oldlen = (*pskb)->len - iph->ihl*4; @@ -250,13 +264,25 @@ nf_nat_mangle_udp_packet(struct sk_buff return 1; if ((*pskb)->ip_summed != CHECKSUM_PARTIAL) { - udph->check = 0; - udph->check = csum_tcpudp_magic(iph->saddr, iph->daddr, - datalen, IPPROTO_UDP, - csum_partial((char *)udph, - datalen, 0)); - if (!udph->check) - udph->check = CSUM_MANGLED_0; + if (!(rt->rt_flags & RTCF_LOCAL) && + (*pskb)->dev->features & NETIF_F_ALL_CSUM) { + (*pskb)->ip_summed = CHECKSUM_PARTIAL; + (*pskb)->csum_start = skb_headroom(*pskb) + + skb_network_offset(*pskb) + + iph->ihl * 4; + (*pskb)->csum_offset = offsetof(struct udphdr, check); + udph->check = ~csum_tcpudp_magic(iph->saddr, iph->daddr, + datalen, IPPROTO_UDP, + 0); + } else { + udph->check = 0; + udph->check = csum_tcpudp_magic(iph->saddr, iph->daddr, + datalen, IPPROTO_UDP, + csum_partial((char *)udph, + datalen, 0)); + if (!udph->check) + udph->check = CSUM_MANGLED_0; + } } else nf_proto_csum_replace2(&udph->check, *pskb, htons(oldlen), htons(datalen), 1); @@ -318,8 +344,8 @@ nf_nat_sack_adjust(struct sk_buff **pskb unsigned int dir, optoff, optend; struct nf_conn_nat *nat = nfct_nat(ct); - optoff = (*pskb)->nh.iph->ihl*4 + sizeof(struct tcphdr); - optend = (*pskb)->nh.iph->ihl*4 + tcph->doff*4; + optoff = ip_hdrlen(*pskb) + sizeof(struct tcphdr); + optend = ip_hdrlen(*pskb) + tcph->doff * 4; if 
(!skb_make_writable(pskb, optend)) return 0; @@ -371,10 +397,10 @@ nf_nat_seq_adjust(struct sk_buff **pskb, this_way = &nat->info.seq[dir]; other_way = &nat->info.seq[!dir]; - if (!skb_make_writable(pskb, (*pskb)->nh.iph->ihl*4+sizeof(*tcph))) + if (!skb_make_writable(pskb, ip_hdrlen(*pskb) + sizeof(*tcph))) return 0; - tcph = (void *)(*pskb)->data + (*pskb)->nh.iph->ihl*4; + tcph = (void *)(*pskb)->data + ip_hdrlen(*pskb); if (after(ntohl(tcph->seq), this_way->correction_pos)) newseq = htonl(ntohl(tcph->seq) + this_way->offset_after); else @@ -399,7 +425,7 @@ nf_nat_seq_adjust(struct sk_buff **pskb, if (!nf_nat_sack_adjust(pskb, tcph, ct, ctinfo)) return 0; - nf_conntrack_tcp_update(*pskb, (*pskb)->nh.iph->ihl*4, ct, dir); + nf_conntrack_tcp_update(*pskb, ip_hdrlen(*pskb), ct, dir); return 1; } diff -puN net/ipv4/netfilter/nf_nat_pptp.c~git-net net/ipv4/netfilter/nf_nat_pptp.c --- a/net/ipv4/netfilter/nf_nat_pptp.c~git-net +++ a/net/ipv4/netfilter/nf_nat_pptp.c @@ -53,7 +53,7 @@ static void pptp_nat_expected(struct nf_ struct nf_conntrack_tuple t; struct nf_ct_pptp_master *ct_pptp_info; struct nf_nat_pptp *nat_pptp_info; - struct ip_nat_range range; + struct nf_nat_range range; ct_pptp_info = &nfct_help(master)->help.ct_pptp_info; nat_pptp_info = &nfct_nat(master)->help.nat_pptp_info; diff -puN net/ipv4/netfilter/nf_nat_rule.c~git-net net/ipv4/netfilter/nf_nat_rule.c --- a/net/ipv4/netfilter/nf_nat_rule.c~git-net +++ a/net/ipv4/netfilter/nf_nat_rule.c @@ -191,7 +191,7 @@ static unsigned int ipt_dnat_target(stru if (hooknum == NF_IP_LOCAL_OUT && mr->range[0].flags & IP_NAT_RANGE_MAP_IPS) - warn_if_extra_mangle((*pskb)->nh.iph->daddr, + warn_if_extra_mangle(ip_hdr(*pskb)->daddr, mr->range[0].min_ip); return nf_nat_setup_info(ct, &mr->range[0], hooknum); diff -puN net/ipv4/netfilter/nf_nat_sip.c~git-net net/ipv4/netfilter/nf_nat_sip.c --- a/net/ipv4/netfilter/nf_nat_sip.c~git-net +++ a/net/ipv4/netfilter/nf_nat_sip.c @@ -11,6 +11,7 @@ #include #include #include +#include #include #include @@ -92,7 +93,7 @@ static int map_sip_addr(struct sk_buff * if (!nf_nat_mangle_udp_packet(pskb, ct, ctinfo, matchoff, matchlen, addr, addrlen)) return 0; - *dptr = (*pskb)->data + (*pskb)->nh.iph->ihl*4 + sizeof(struct udphdr); + *dptr = (*pskb)->data + ip_hdrlen(*pskb) + sizeof(struct udphdr); return 1; } @@ -106,7 +107,7 @@ static unsigned int ip_nat_sip(struct sk struct addr_map map; int dataoff, datalen; - dataoff = (*pskb)->nh.iph->ihl*4 + sizeof(struct udphdr); + dataoff = ip_hdrlen(*pskb) + sizeof(struct udphdr); datalen = (*pskb)->len - dataoff; if (datalen < sizeof("SIP/2.0") - 1) return NF_DROP; @@ -155,7 +156,7 @@ static unsigned int mangle_sip_packet(st return 0; /* We need to reload this. Thanks Patrick. */ - *dptr = (*pskb)->data + (*pskb)->nh.iph->ihl*4 + sizeof(struct udphdr); + *dptr = (*pskb)->data + ip_hdrlen(*pskb) + sizeof(struct udphdr); return 1; } @@ -168,7 +169,7 @@ static int mangle_content_len(struct sk_ char buffer[sizeof("65536")]; int bufflen; - dataoff = (*pskb)->nh.iph->ihl*4 + sizeof(struct udphdr); + dataoff = ip_hdrlen(*pskb) + sizeof(struct udphdr); /* Get actual SDP lenght */ if (ct_sip_get_info(ct, dptr, (*pskb)->len - dataoff, &matchoff, @@ -200,7 +201,7 @@ static unsigned int mangle_sdp(struct sk char buffer[sizeof("nnn.nnn.nnn.nnn")]; unsigned int dataoff, bufflen; - dataoff = (*pskb)->nh.iph->ihl*4 + sizeof(struct udphdr); + dataoff = ip_hdrlen(*pskb) + sizeof(struct udphdr); /* Mangle owner and contact info. 
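The nf_nat_mangle_tcp_packet() and nf_nat_mangle_udp_packet() hunks above add a checksum-offload fast path: when the packet is not headed for the local stack (no RTCF_LOCAL in the route flags) and the output device advertises NETIF_F_ALL_CSUM, the helper keeps ip_summed at CHECKSUM_PARTIAL, records where the device must start summing (csum_start) and where to store the result (csum_offset), and seeds the checksum field with the complemented pseudo-header sum; otherwise it falls back to a full software checksum. The TCP side, condensed into a standalone sketch (example_fix_tcp_csum is an invented name):

static void example_fix_tcp_csum(struct sk_buff *skb)
{
        struct rtable *rt = (struct rtable *)skb->dst;
        struct iphdr *iph = ip_hdr(skb);
        struct tcphdr *tcph = (void *)iph + iph->ihl * 4;
        int datalen = skb->len - iph->ihl * 4;

        if (!(rt->rt_flags & RTCF_LOCAL) &&
            (skb->dev->features & NETIF_F_ALL_CSUM)) {
                /* Let the NIC finish the sum on transmit. */
                skb->ip_summed = CHECKSUM_PARTIAL;
                skb->csum_start = skb_headroom(skb) + skb_network_offset(skb) +
                                  iph->ihl * 4;
                skb->csum_offset = offsetof(struct tcphdr, check);
                tcph->check = ~tcp_v4_check(datalen, iph->saddr,
                                            iph->daddr, 0);
        } else {
                /* No usable offload: compute everything in software. */
                tcph->check = 0;
                tcph->check = tcp_v4_check(datalen, iph->saddr, iph->daddr,
                                           csum_partial((char *)tcph,
                                                        datalen, 0));
        }
}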
*/ bufflen = sprintf(buffer, "%u.%u.%u.%u", NIPQUAD(newip)); diff -puN net/ipv4/netfilter/nf_nat_snmp_basic.c~git-net net/ipv4/netfilter/nf_nat_snmp_basic.c --- a/net/ipv4/netfilter/nf_nat_snmp_basic.c~git-net +++ a/net/ipv4/netfilter/nf_nat_snmp_basic.c @@ -38,10 +38,6 @@ * Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA * * Author: James Morris - * - * Updates: - * 2000-08-06: Convert to new helper API (Harald Welte). - * */ #include #include @@ -1194,7 +1190,7 @@ static int snmp_translate(struct nf_conn enum ip_conntrack_info ctinfo, struct sk_buff **pskb) { - struct iphdr *iph = (*pskb)->nh.iph; + struct iphdr *iph = ip_hdr(*pskb); struct udphdr *udph = (struct udphdr *)((__be32 *)iph + iph->ihl); u_int16_t udplen = ntohs(udph->len); u_int16_t paylen = udplen - sizeof(struct udphdr); @@ -1235,7 +1231,7 @@ static int help(struct sk_buff **pskb, u { int dir = CTINFO2DIR(ctinfo); unsigned int ret; - struct iphdr *iph = (*pskb)->nh.iph; + struct iphdr *iph = ip_hdr(*pskb); struct udphdr *udph = (struct udphdr *)((u_int32_t *)iph + iph->ihl); /* SNMP replies and originating SNMP traps get mangled */ diff -puN net/ipv4/netfilter/nf_nat_standalone.c~git-net net/ipv4/netfilter/nf_nat_standalone.c --- a/net/ipv4/netfilter/nf_nat_standalone.c~git-net +++ a/net/ipv4/netfilter/nf_nat_standalone.c @@ -86,8 +86,7 @@ nf_nat_fn(unsigned int hooknum, /* We never see fragments: conntrack defrags on pre-routing and local-out, and nf_nat_out protects post-routing. */ - NF_CT_ASSERT(!((*pskb)->nh.iph->frag_off - & htons(IP_MF|IP_OFFSET))); + NF_CT_ASSERT(!(ip_hdr(*pskb)->frag_off & htons(IP_MF | IP_OFFSET))); ct = nf_ct_get(*pskb, &ctinfo); /* Can't track? It's not due to stress, or conntrack would @@ -98,11 +97,10 @@ nf_nat_fn(unsigned int hooknum, /* Exception: ICMP redirect to new connection (not in hash table yet). We must not let this through, in case we're doing NAT to the same network. */ - if ((*pskb)->nh.iph->protocol == IPPROTO_ICMP) { + if (ip_hdr(*pskb)->protocol == IPPROTO_ICMP) { struct icmphdr _hdr, *hp; - hp = skb_header_pointer(*pskb, - (*pskb)->nh.iph->ihl*4, + hp = skb_header_pointer(*pskb, ip_hdrlen(*pskb), sizeof(_hdr), &_hdr); if (hp != NULL && hp->type == ICMP_REDIRECT) @@ -122,7 +120,7 @@ nf_nat_fn(unsigned int hooknum, switch (ctinfo) { case IP_CT_RELATED: case IP_CT_RELATED+IP_CT_IS_REPLY: - if ((*pskb)->nh.iph->protocol == IPPROTO_ICMP) { + if (ip_hdr(*pskb)->protocol == IPPROTO_ICMP) { if (!nf_nat_icmp_reply_translation(ct, ctinfo, hooknum, pskb)) return NF_DROP; @@ -177,11 +175,11 @@ nf_nat_in(unsigned int hooknum, int (*okfn)(struct sk_buff *)) { unsigned int ret; - __be32 daddr = (*pskb)->nh.iph->daddr; + __be32 daddr = ip_hdr(*pskb)->daddr; ret = nf_nat_fn(hooknum, pskb, in, out, okfn); if (ret != NF_DROP && ret != NF_STOLEN && - daddr != (*pskb)->nh.iph->daddr) { + daddr != ip_hdr(*pskb)->daddr) { dst_release((*pskb)->dst); (*pskb)->dst = NULL; } @@ -203,7 +201,7 @@ nf_nat_out(unsigned int hooknum, /* root is playing with raw sockets. */ if ((*pskb)->len < sizeof(struct iphdr) || - (*pskb)->nh.iph->ihl * 4 < sizeof(struct iphdr)) + ip_hdrlen(*pskb) < sizeof(struct iphdr)) return NF_ACCEPT; ret = nf_nat_fn(hooknum, pskb, in, out, okfn); @@ -236,7 +234,7 @@ nf_nat_local_fn(unsigned int hooknum, /* root is playing with raw sockets. 
*/ if ((*pskb)->len < sizeof(struct iphdr) || - (*pskb)->nh.iph->ihl * 4 < sizeof(struct iphdr)) + ip_hdrlen(*pskb) < sizeof(struct iphdr)) return NF_ACCEPT; ret = nf_nat_fn(hooknum, pskb, in, out, okfn); diff -puN net/ipv4/proc.c~git-net net/ipv4/proc.c --- a/net/ipv4/proc.c~git-net +++ a/net/ipv4/proc.c @@ -391,3 +391,28 @@ out_netstat: goto out; } +int snmp_mib_init(void *ptr[2], size_t mibsize, size_t mibalign) +{ + BUG_ON(ptr == NULL); + ptr[0] = __alloc_percpu(mibsize); + if (!ptr[0]) + goto err0; + ptr[1] = __alloc_percpu(mibsize); + if (!ptr[1]) + goto err1; + return 0; +err1: + free_percpu(ptr[0]); + ptr[0] = NULL; +err0: + return -ENOMEM; +} + +void snmp_mib_free(void *ptr[2]) +{ + BUG_ON(ptr == NULL); + free_percpu(ptr[0]); + free_percpu(ptr[1]); + ptr[0] = ptr[1] = NULL; +} + diff -puN net/ipv4/protocol.c~git-net net/ipv4/protocol.c --- a/net/ipv4/protocol.c~git-net +++ a/net/ipv4/protocol.c @@ -45,7 +45,7 @@ #include #include -struct net_protocol *inet_protos[MAX_INET_PROTOS]; +struct net_protocol *inet_protos[MAX_INET_PROTOS] ____cacheline_aligned_in_smp; static DEFINE_SPINLOCK(inet_proto_lock); /* diff -puN net/ipv4/raw.c~git-net net/ipv4/raw.c --- a/net/ipv4/raw.c~git-net +++ a/net/ipv4/raw.c @@ -132,7 +132,7 @@ static __inline__ int icmp_filter(struct if (!pskb_may_pull(skb, sizeof(struct icmphdr))) return 1; - type = skb->h.icmph->type; + type = icmp_hdr(skb)->type; if (type < 32) { __u32 data = raw_sk(sk)->filter.data; @@ -184,8 +184,8 @@ out: void raw_err (struct sock *sk, struct sk_buff *skb, u32 info) { struct inet_sock *inet = inet_sk(sk); - int type = skb->h.icmph->type; - int code = skb->h.icmph->code; + const int type = icmp_hdr(skb)->type; + const int code = icmp_hdr(skb)->code; int err = 0; int harderr = 0; @@ -256,7 +256,7 @@ int raw_rcv(struct sock *sk, struct sk_b } nf_reset(skb); - skb_push(skb, skb->data - skb->nh.raw); + skb_push(skb, skb->data - skb_network_header(skb)); raw_rcv_skb(sk, skb); return 0; @@ -291,11 +291,13 @@ static int raw_send_hdrinc(struct sock * skb->priority = sk->sk_priority; skb->dst = dst_clone(&rt->u.dst); - skb->nh.iph = iph = (struct iphdr *)skb_put(skb, length); + skb_reset_network_header(skb); + iph = ip_hdr(skb); + skb_put(skb, length); skb->ip_summed = CHECKSUM_NONE; - skb->h.raw = skb->nh.raw; + skb->transport_header = skb->network_header; err = memcpy_fromiovecend((void *)iph, from, 0, length); if (err) goto error_fault; @@ -613,7 +615,7 @@ static int raw_recvmsg(struct kiocb *ioc /* Copy the address. 
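net/ipv4/proc.c above gains snmp_mib_init() and snmp_mib_free(), which allocate and release the pair of per-cpu copies (index 0 for softirq context, index 1 for process context) that the SNMP statistics macros expect. A hypothetical caller showing the pairing; example_mib and example_statistics are invented, and the real callers pass the various IPv4 MIB arrays instead:

struct example_mib {
        unsigned long mibs[16];                 /* invented counter block */
};

static struct example_mib *example_statistics[2];

static int __init example_mib_setup(void)
{
        return snmp_mib_init((void **)example_statistics,
                             sizeof(struct example_mib),
                             __alignof__(struct example_mib));
}

static void example_mib_teardown(void)
{
        snmp_mib_free((void **)example_statistics);
}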
*/ if (sin) { sin->sin_family = AF_INET; - sin->sin_addr.s_addr = skb->nh.iph->saddr; + sin->sin_addr.s_addr = ip_hdr(skb)->saddr; sin->sin_port = 0; memset(&sin->sin_zero, 0, sizeof(sin->sin_zero)); } @@ -887,7 +889,7 @@ static int raw_seq_show(struct seq_file return 0; } -static struct seq_operations raw_seq_ops = { +static const struct seq_operations raw_seq_ops = { .start = raw_seq_start, .next = raw_seq_next, .stop = raw_seq_stop, diff -puN net/ipv4/route.c~git-net net/ipv4/route.c --- a/net/ipv4/route.c~git-net +++ a/net/ipv4/route.c @@ -82,7 +82,6 @@ #include #include #include -#include #include #include #include @@ -104,6 +103,7 @@ #include #include #include +#include #ifdef CONFIG_SYSCTL #include #endif @@ -364,7 +364,7 @@ static int rt_cache_seq_show(struct seq_ return 0; } -static struct seq_operations rt_cache_seq_ops = { +static const struct seq_operations rt_cache_seq_ops = { .start = rt_cache_seq_start, .next = rt_cache_seq_next, .stop = rt_cache_seq_stop, @@ -470,7 +470,7 @@ static int rt_cpu_seq_show(struct seq_fi return 0; } -static struct seq_operations rt_cpu_seq_ops = { +static const struct seq_operations rt_cpu_seq_ops = { .start = rt_cpu_seq_start, .next = rt_cpu_seq_next, .stop = rt_cpu_seq_stop, @@ -1519,7 +1519,7 @@ static void ipv4_link_failure(struct sk_ static int ip_rt_bug(struct sk_buff *skb) { printk(KERN_DEBUG "ip_rt_bug: %u.%u.%u.%u -> %u.%u.%u.%u, %s\n", - NIPQUAD(skb->nh.iph->saddr), NIPQUAD(skb->nh.iph->daddr), + NIPQUAD(ip_hdr(skb)->saddr), NIPQUAD(ip_hdr(skb)->daddr), skb->dev ? skb->dev->name : "?"); kfree_skb(skb); return 0; @@ -1698,9 +1698,9 @@ static void ip_handle_martian_source(str printk(KERN_WARNING "martian source %u.%u.%u.%u from " "%u.%u.%u.%u, on dev %s\n", NIPQUAD(daddr), NIPQUAD(saddr), dev->name); - if (dev->hard_header_len && skb->mac.raw) { + if (dev->hard_header_len && skb_mac_header_was_set(skb)) { int i; - unsigned char *p = skb->mac.raw; + const unsigned char *p = skb_mac_header(skb); printk(KERN_WARNING "ll header: "); for (i = 0; i < dev->hard_header_len; i++, p++) { printk("%02x", *p); @@ -2134,7 +2134,7 @@ int ip_route_input(struct sk_buff *skb, rcu_read_lock(); if ((in_dev = __in_dev_get_rcu(dev)) != NULL) { int our = ip_check_mc(in_dev, daddr, saddr, - skb->nh.iph->protocol); + ip_hdr(skb)->protocol); if (our #ifdef CONFIG_IP_MROUTE || (!LOCAL_MCAST(daddr) && IN_DEV_MFORWARD(in_dev)) @@ -2396,7 +2396,7 @@ static int ip_route_output_slow(struct r /* It is equivalent to inet_addr_type(saddr) == RTN_LOCAL */ dev_out = ip_dev_find(oldflp->fl4_src); - if (dev_out == NULL) + if ((dev_out == NULL) && !(sysctl_ip_nonlocal_bind)) goto out; /* I removed check for oif == dev_out->oif here. @@ -2407,7 +2407,7 @@ static int ip_route_output_slow(struct r of another iface. 
--ANK */ - if (oldflp->oif == 0 + if (dev_out && oldflp->oif == 0 && (MULTICAST(oldflp->fl4_dst) || oldflp->fl4_dst == htonl(0xFFFFFFFF))) { /* Special hack: user can direct multicasts and limited broadcast via necessary interface @@ -2683,7 +2683,7 @@ static int rt_fill_info(struct sk_buff * id = rt->peer->ip_id_count; if (rt->peer->tcp_ts_stamp) { ts = rt->peer->tcp_ts; - tsage = xtime.tv_sec - rt->peer->tcp_ts_stamp; + tsage = get_seconds() - rt->peer->tcp_ts_stamp; } } @@ -2721,7 +2721,7 @@ nla_put_failure: return -EMSGSIZE; } -int inet_rtm_getroute(struct sk_buff *in_skb, struct nlmsghdr* nlh, void *arg) +static int inet_rtm_getroute(struct sk_buff *in_skb, struct nlmsghdr* nlh, void *arg) { struct rtmsg *rtm; struct nlattr *tb[RTA_MAX+1]; @@ -2747,10 +2747,11 @@ int inet_rtm_getroute(struct sk_buff *in /* Reserve room for dummy headers, this skb can pass through good chunk of routing engine. */ - skb->mac.raw = skb->nh.raw = skb->data; + skb_reset_mac_header(skb); + skb_reset_network_header(skb); /* Bugfix: need to give ip_route_input enough of an IP header to not gag. */ - skb->nh.iph->protocol = IPPROTO_ICMP; + ip_hdr(skb)->protocol = IPPROTO_ICMP; skb_reserve(skb, MAX_HEADER + sizeof(struct iphdr)); src = tb[RTA_SRC] ? nla_get_be32(tb[RTA_SRC]) : 0; @@ -3193,6 +3194,8 @@ int __init ip_rt_init(void) xfrm_init(); xfrm4_init(); #endif + rtnl_register(PF_INET, RTM_GETROUTE, inet_rtm_getroute, NULL); + return rc; } diff -puN net/ipv4/syncookies.c~git-net net/ipv4/syncookies.c --- a/net/ipv4/syncookies.c~git-net +++ a/net/ipv4/syncookies.c @@ -125,10 +125,11 @@ static __u16 const msstab[] = { __u32 cookie_v4_init_sequence(struct sock *sk, struct sk_buff *skb, __u16 *mssp) { struct tcp_sock *tp = tcp_sk(sk); + const struct iphdr *iph = ip_hdr(skb); + const struct tcphdr *th = tcp_hdr(skb); int mssind; const __u16 mss = *mssp; - tp->last_synq_overflow = jiffies; /* XXX sort msstab[] by probability? Binary search? */ @@ -138,9 +139,8 @@ __u32 cookie_v4_init_sequence(struct soc NET_INC_STATS_BH(LINUX_MIB_SYNCOOKIESSENT); - return secure_tcp_syn_cookie(skb->nh.iph->saddr, skb->nh.iph->daddr, - skb->h.th->source, skb->h.th->dest, - ntohl(skb->h.th->seq), + return secure_tcp_syn_cookie(iph->saddr, iph->daddr, + th->source, th->dest, ntohl(th->seq), jiffies / (HZ * 60), mssind); } @@ -157,14 +157,13 @@ __u32 cookie_v4_init_sequence(struct soc */ static inline int cookie_check(struct sk_buff *skb, __u32 cookie) { - __u32 seq; - __u32 mssind; - - seq = ntohl(skb->h.th->seq)-1; - mssind = check_tcp_syn_cookie(cookie, - skb->nh.iph->saddr, skb->nh.iph->daddr, - skb->h.th->source, skb->h.th->dest, - seq, jiffies / (HZ * 60), COUNTER_TRIES); + const struct iphdr *iph = ip_hdr(skb); + const struct tcphdr *th = tcp_hdr(skb); + __u32 seq = ntohl(th->seq) - 1; + __u32 mssind = check_tcp_syn_cookie(cookie, iph->saddr, iph->daddr, + th->source, th->dest, seq, + jiffies / (HZ * 60), + COUNTER_TRIES); return mssind < NUM_MSS ? 
msstab[mssind] + 1 : 0; } @@ -191,14 +190,15 @@ struct sock *cookie_v4_check(struct sock struct inet_request_sock *ireq; struct tcp_request_sock *treq; struct tcp_sock *tp = tcp_sk(sk); - __u32 cookie = ntohl(skb->h.th->ack_seq) - 1; + const struct tcphdr *th = tcp_hdr(skb); + __u32 cookie = ntohl(th->ack_seq) - 1; struct sock *ret = sk; struct request_sock *req; int mss; struct rtable *rt; __u8 rcv_wscale; - if (!sysctl_tcp_syncookies || !skb->h.th->ack) + if (!sysctl_tcp_syncookies || !th->ack) goto out; if (time_after(jiffies, tp->last_synq_overflow + TCP_TIMEOUT_INIT) || @@ -220,12 +220,12 @@ struct sock *cookie_v4_check(struct sock } ireq = inet_rsk(req); treq = tcp_rsk(req); - treq->rcv_isn = ntohl(skb->h.th->seq) - 1; + treq->rcv_isn = ntohl(th->seq) - 1; treq->snt_isn = cookie; req->mss = mss; - ireq->rmt_port = skb->h.th->source; - ireq->loc_addr = skb->nh.iph->daddr; - ireq->rmt_addr = skb->nh.iph->saddr; + ireq->rmt_port = th->source; + ireq->loc_addr = ip_hdr(skb)->daddr; + ireq->rmt_addr = ip_hdr(skb)->saddr; ireq->opt = NULL; /* We throwed the options of the initial SYN away, so we hope @@ -261,8 +261,8 @@ struct sock *cookie_v4_check(struct sock .tos = RT_CONN_FLAGS(sk) } }, .proto = IPPROTO_TCP, .uli_u = { .ports = - { .sport = skb->h.th->dest, - .dport = skb->h.th->source } } }; + { .sport = th->dest, + .dport = th->source } } }; security_req_classify_flow(req, &fl); if (ip_route_output_key(&rt, &fl)) { reqsk_free(req); diff -puN net/ipv4/sysctl_net_ipv4.c~git-net net/ipv4/sysctl_net_ipv4.c --- a/net/ipv4/sysctl_net_ipv4.c~git-net +++ a/net/ipv4/sysctl_net_ipv4.c @@ -647,6 +647,14 @@ ctl_table ipv4_table[] = { .proc_handler = &proc_dointvec }, { + .ctl_name = NET_TCP_FRTO_RESPONSE, + .procname = "tcp_frto_response", + .data = &sysctl_tcp_frto_response, + .maxlen = sizeof(int), + .mode = 0644, + .proc_handler = &proc_dointvec + }, + { .ctl_name = NET_TCP_LOW_LATENCY, .procname = "tcp_low_latency", .data = &sysctl_tcp_low_latency, @@ -803,6 +811,14 @@ ctl_table ipv4_table[] = { .proc_handler = &proc_allowed_congestion_control, .strategy = &strategy_allowed_congestion_control, }, + { + .ctl_name = NET_TCP_MAX_SSTHRESH, + .procname = "tcp_max_ssthresh", + .data = &sysctl_tcp_max_ssthresh, + .maxlen = sizeof(int), + .mode = 0644, + .proc_handler = &proc_dointvec, + }, { .ctl_name = 0 } }; diff -puN net/ipv4/tcp.c~git-net net/ipv4/tcp.c --- a/net/ipv4/tcp.c~git-net +++ a/net/ipv4/tcp.c @@ -297,7 +297,7 @@ EXPORT_SYMBOL(tcp_sockets_allocated); * All the sk_stream_mem_schedule() is of this nature: accounting * is strict, actions are advisory and have some latency. */ -int tcp_memory_pressure; +int tcp_memory_pressure __read_mostly; EXPORT_SYMBOL(tcp_memory_pressure); @@ -425,7 +425,7 @@ int tcp_ioctl(struct sock *sk, int cmd, /* Subtract 1, if FIN is in queue. 
*/ if (answ && !skb_queue_empty(&sk->sk_receive_queue)) answ -= - ((struct sk_buff *)sk->sk_receive_queue.prev)->h.th->fin; + tcp_hdr((struct sk_buff *)sk->sk_receive_queue.prev)->fin; } else answ = tp->urg_seq - tp->copied_seq; release_sock(sk); @@ -444,7 +444,7 @@ int tcp_ioctl(struct sock *sk, int cmd, break; default: return -ENOIOCTLCMD; - }; + } return put_user(answ, (int __user *)arg); } @@ -460,9 +460,9 @@ static inline int forced_push(struct tcp return after(tp->write_seq, tp->pushed_seq + (tp->max_window >> 1)); } -static inline void skb_entail(struct sock *sk, struct tcp_sock *tp, - struct sk_buff *skb) +static inline void skb_entail(struct sock *sk, struct sk_buff *skb) { + struct tcp_sock *tp = tcp_sk(sk); struct tcp_skb_cb *tcb = TCP_SKB_CB(skb); skb->csum = 0; @@ -470,10 +470,8 @@ static inline void skb_entail(struct soc tcb->flags = TCPCB_FLAG_ACK; tcb->sacked = 0; skb_header_release(skb); - __skb_queue_tail(&sk->sk_write_queue, skb); + tcp_add_write_queue_tail(sk, skb); sk_charge_skb(sk, skb); - if (!sk->sk_send_head) - sk->sk_send_head = skb; if (tp->nonagle & TCP_NAGLE_PUSH) tp->nonagle &= ~TCP_NAGLE_PUSH; } @@ -488,15 +486,17 @@ static inline void tcp_mark_urg(struct t } } -static inline void tcp_push(struct sock *sk, struct tcp_sock *tp, int flags, - int mss_now, int nonagle) +static inline void tcp_push(struct sock *sk, int flags, int mss_now, + int nonagle) { - if (sk->sk_send_head) { - struct sk_buff *skb = sk->sk_write_queue.prev; + struct tcp_sock *tp = tcp_sk(sk); + + if (tcp_send_head(sk)) { + struct sk_buff *skb = tcp_write_queue_tail(sk); if (!(flags & MSG_MORE) || forced_push(tp)) tcp_mark_push(tp, skb); tcp_mark_urg(tp, flags, skb); - __tcp_push_pending_frames(sk, tp, mss_now, + __tcp_push_pending_frames(sk, mss_now, (flags & MSG_MORE) ? 
TCP_NAGLE_CORK : nonagle); } } @@ -526,13 +526,13 @@ static ssize_t do_tcp_sendpages(struct s goto do_error; while (psize > 0) { - struct sk_buff *skb = sk->sk_write_queue.prev; + struct sk_buff *skb = tcp_write_queue_tail(sk); struct page *page = pages[poffset / PAGE_SIZE]; int copy, i, can_coalesce; int offset = poffset % PAGE_SIZE; int size = min_t(size_t, psize, PAGE_SIZE - offset); - if (!sk->sk_send_head || (copy = size_goal - skb->len) <= 0) { + if (!tcp_send_head(sk) || (copy = size_goal - skb->len) <= 0) { new_segment: if (!sk_stream_memory_free(sk)) goto wait_for_sndbuf; @@ -542,7 +542,7 @@ new_segment: if (!skb) goto wait_for_memory; - skb_entail(sk, tp, skb); + skb_entail(sk, skb); copy = size_goal; } @@ -588,8 +588,8 @@ new_segment: if (forced_push(tp)) { tcp_mark_push(tp, skb); - __tcp_push_pending_frames(sk, tp, mss_now, TCP_NAGLE_PUSH); - } else if (skb == sk->sk_send_head) + __tcp_push_pending_frames(sk, mss_now, TCP_NAGLE_PUSH); + } else if (skb == tcp_send_head(sk)) tcp_push_one(sk, mss_now); continue; @@ -597,7 +597,7 @@ wait_for_sndbuf: set_bit(SOCK_NOSPACE, &sk->sk_socket->flags); wait_for_memory: if (copied) - tcp_push(sk, tp, flags & ~MSG_MORE, mss_now, TCP_NAGLE_PUSH); + tcp_push(sk, flags & ~MSG_MORE, mss_now, TCP_NAGLE_PUSH); if ((err = sk_stream_wait_memory(sk, &timeo)) != 0) goto do_error; @@ -608,7 +608,7 @@ wait_for_memory: out: if (copied) - tcp_push(sk, tp, flags, mss_now, tp->nonagle); + tcp_push(sk, flags, mss_now, tp->nonagle); return copied; do_error: @@ -639,8 +639,9 @@ ssize_t tcp_sendpage(struct socket *sock #define TCP_PAGE(sk) (sk->sk_sndmsg_page) #define TCP_OFF(sk) (sk->sk_sndmsg_off) -static inline int select_size(struct sock *sk, struct tcp_sock *tp) +static inline int select_size(struct sock *sk) { + struct tcp_sock *tp = tcp_sk(sk); int tmp = tp->mss_cache; if (sk->sk_route_caps & NETIF_F_SG) { @@ -704,9 +705,9 @@ int tcp_sendmsg(struct kiocb *iocb, stru while (seglen > 0) { int copy; - skb = sk->sk_write_queue.prev; + skb = tcp_write_queue_tail(sk); - if (!sk->sk_send_head || + if (!tcp_send_head(sk) || (copy = size_goal - skb->len) <= 0) { new_segment: @@ -716,7 +717,7 @@ new_segment: if (!sk_stream_memory_free(sk)) goto wait_for_sndbuf; - skb = sk_stream_alloc_pskb(sk, select_size(sk, tp), + skb = sk_stream_alloc_pskb(sk, select_size(sk), 0, sk->sk_allocation); if (!skb) goto wait_for_memory; @@ -727,7 +728,7 @@ new_segment: if (sk->sk_route_caps & NETIF_F_ALL_CSUM) skb->ip_summed = CHECKSUM_PARTIAL; - skb_entail(sk, tp, skb); + skb_entail(sk, skb); copy = size_goal; } @@ -832,8 +833,8 @@ new_segment: if (forced_push(tp)) { tcp_mark_push(tp, skb); - __tcp_push_pending_frames(sk, tp, mss_now, TCP_NAGLE_PUSH); - } else if (skb == sk->sk_send_head) + __tcp_push_pending_frames(sk, mss_now, TCP_NAGLE_PUSH); + } else if (skb == tcp_send_head(sk)) tcp_push_one(sk, mss_now); continue; @@ -841,7 +842,7 @@ wait_for_sndbuf: set_bit(SOCK_NOSPACE, &sk->sk_socket->flags); wait_for_memory: if (copied) - tcp_push(sk, tp, flags & ~MSG_MORE, mss_now, TCP_NAGLE_PUSH); + tcp_push(sk, flags & ~MSG_MORE, mss_now, TCP_NAGLE_PUSH); if ((err = sk_stream_wait_memory(sk, &timeo)) != 0) goto do_error; @@ -853,16 +854,18 @@ wait_for_memory: out: if (copied) - tcp_push(sk, tp, flags, mss_now, tp->nonagle); + tcp_push(sk, flags, mss_now, tp->nonagle); TCP_CHECK_TIMER(sk); release_sock(sk); return copied; do_fault: if (!skb->len) { - if (sk->sk_send_head == skb) - sk->sk_send_head = NULL; - __skb_unlink(skb, &sk->sk_write_queue); + tcp_unlink_write_queue(skb, sk); + /* 
It is the one place in all of TCP, except connection + * reset, where we can be unlinking the send_head. + */ + tcp_check_send_head(sk, skb); sk_stream_free_skb(sk, skb); } @@ -1016,9 +1019,9 @@ static inline struct sk_buff *tcp_recv_s skb_queue_walk(&sk->sk_receive_queue, skb) { offset = seq - TCP_SKB_CB(skb)->seq; - if (skb->h.th->syn) + if (tcp_hdr(skb)->syn) offset--; - if (offset < skb->len || skb->h.th->fin) { + if (offset < skb->len || tcp_hdr(skb)->fin) { *off = offset; return skb; } @@ -1070,7 +1073,7 @@ int tcp_read_sock(struct sock *sk, read_ if (offset != skb->len) break; } - if (skb->h.th->fin) { + if (tcp_hdr(skb)->fin) { sk_eat_skb(sk, skb, 0); ++seq; break; @@ -1174,11 +1177,11 @@ int tcp_recvmsg(struct kiocb *iocb, stru break; } offset = *seq - TCP_SKB_CB(skb)->seq; - if (skb->h.th->syn) + if (tcp_hdr(skb)->syn) offset--; if (offset < skb->len) goto found_ok_skb; - if (skb->h.th->fin) + if (tcp_hdr(skb)->fin) goto found_fin_ok; BUG_TRAP(flags & MSG_PEEK); skb = skb->next; @@ -1389,12 +1392,12 @@ do_prequeue: skip_copy: if (tp->urg_data && after(tp->copied_seq, tp->urg_seq)) { tp->urg_data = 0; - tcp_fast_path_check(sk, tp); + tcp_fast_path_check(sk); } if (used + offset < skb->len) continue; - if (skb->h.th->fin) + if (tcp_hdr(skb)->fin) goto found_fin_ok; if (!(flags & MSG_PEEK)) { sk_eat_skb(sk, skb, copied_early); @@ -1563,7 +1566,7 @@ void tcp_close(struct sock *sk, long tim */ while ((skb = __skb_dequeue(&sk->sk_receive_queue)) != NULL) { u32 len = TCP_SKB_CB(skb)->end_seq - TCP_SKB_CB(skb)->seq - - skb->h.th->fin; + tcp_hdr(skb)->fin; data_was_unread += len; __kfree_skb(skb); } @@ -1732,7 +1735,7 @@ int tcp_disconnect(struct sock *sk, int tcp_clear_xmit_timers(sk); __skb_queue_purge(&sk->sk_receive_queue); - sk_stream_writequeue_purge(sk); + tcp_write_queue_purge(sk); __skb_queue_purge(&tp->out_of_order_queue); #ifdef CONFIG_NET_DMA __skb_queue_purge(&sk->sk_async_wait_queue); @@ -1758,7 +1761,7 @@ int tcp_disconnect(struct sock *sk, int tcp_set_ca_state(sk, TCP_CA_Open); tcp_clear_retrans(tp); inet_csk_delack_init(sk); - sk->sk_send_head = NULL; + tcp_init_send_head(sk); tp->rx_opt.saw_tstamp = 0; tcp_sack_reset(&tp->rx_opt); __sk_dst_reset(sk); @@ -1830,7 +1833,7 @@ static int do_tcp_setsockopt(struct sock * for currently queued segments. 
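The sk->sk_write_queue and sk->sk_send_head bookkeeping in the tcp.c hunks above is being routed through a small family of write-queue helpers (tcp_send_head, tcp_write_queue_head/tail, tcp_add_write_queue_tail, tcp_unlink_write_queue, tcp_check_send_head, tcp_init_send_head). Their intent is roughly the sketch below; treat the bodies as approximations of the new include/net/tcp.h inlines rather than verbatim copies:

#include <linux/skbuff.h>
#include <net/sock.h>

static inline struct sk_buff *tcp_send_head_sketch(struct sock *sk)
{
	return sk->sk_send_head;		/* first not-yet-sent skb, or NULL */
}

static inline struct sk_buff *tcp_write_queue_head_sketch(struct sock *sk)
{
	return skb_peek(&sk->sk_write_queue);	/* oldest unacked skb */
}

static inline struct sk_buff *tcp_write_queue_tail_sketch(struct sock *sk)
{
	return skb_peek_tail(&sk->sk_write_queue);
}

static inline void tcp_add_write_queue_tail_sketch(struct sock *sk,
						   struct sk_buff *skb)
{
	__skb_queue_tail(&sk->sk_write_queue, skb);
	if (sk->sk_send_head == NULL)		/* nothing unsent yet */
		sk->sk_send_head = skb;
}

static inline void tcp_unlink_write_queue_sketch(struct sk_buff *skb,
						 struct sock *sk)
{
	__skb_unlink(skb, &sk->sk_write_queue);
}

static inline void tcp_check_send_head_sketch(struct sock *sk,
					      struct sk_buff *unlinked)
{
	if (sk->sk_send_head == unlinked)	/* do not leave a dangling send head */
		sk->sk_send_head = NULL;
}

Centralising this means later queue reorganisations only need to touch the helpers, not every call site in tcp.c, tcp_input.c and tcp_output.c.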
*/ tp->nonagle |= TCP_NAGLE_OFF|TCP_NAGLE_PUSH; - tcp_push_pending_frames(sk, tp); + tcp_push_pending_frames(sk); } else { tp->nonagle &= ~TCP_NAGLE_OFF; } @@ -1854,7 +1857,7 @@ static int do_tcp_setsockopt(struct sock tp->nonagle &= ~TCP_NAGLE_CORK; if (tp->nonagle&TCP_NAGLE_OFF) tp->nonagle |= TCP_NAGLE_PUSH; - tcp_push_pending_frames(sk, tp); + tcp_push_pending_frames(sk); } break; @@ -1954,7 +1957,8 @@ static int do_tcp_setsockopt(struct sock default: err = -ENOPROTOOPT; break; - }; + } + release_sock(sk); return err; } @@ -2124,7 +2128,7 @@ static int do_tcp_getsockopt(struct sock return 0; default: return -ENOPROTOOPT; - }; + } if (put_user(len, optlen)) return -EFAULT; @@ -2170,7 +2174,7 @@ struct sk_buff *tcp_tso_segment(struct s if (!pskb_may_pull(skb, sizeof(*th))) goto out; - th = skb->h.th; + th = tcp_hdr(skb); thlen = th->doff * 4; if (thlen < sizeof(*th)) goto out; @@ -2210,7 +2214,7 @@ struct sk_buff *tcp_tso_segment(struct s delta = htonl(oldlen + (thlen + len)); skb = segs; - th = skb->h.th; + th = tcp_hdr(skb); seq = ntohl(th->seq); do { @@ -2219,23 +2223,25 @@ struct sk_buff *tcp_tso_segment(struct s th->check = ~csum_fold((__force __wsum)((__force u32)th->check + (__force u32)delta)); if (skb->ip_summed != CHECKSUM_PARTIAL) - th->check = csum_fold(csum_partial(skb->h.raw, thlen, - skb->csum)); + th->check = + csum_fold(csum_partial(skb_transport_header(skb), + thlen, skb->csum)); seq += len; skb = skb->next; - th = skb->h.th; + th = tcp_hdr(skb); th->seq = htonl(seq); th->cwr = 0; } while (skb->next); - delta = htonl(oldlen + (skb->tail - skb->h.raw) + skb->data_len); + delta = htonl(oldlen + (skb->tail - skb->transport_header) + + skb->data_len); th->check = ~csum_fold((__force __wsum)((__force u32)th->check + (__force u32)delta)); if (skb->ip_summed != CHECKSUM_PARTIAL) - th->check = csum_fold(csum_partial(skb->h.raw, thlen, - skb->csum)); + th->check = csum_fold(csum_partial(skb_transport_header(skb), + thlen, skb->csum)); out: return segs; @@ -2372,6 +2378,23 @@ void __tcp_put_md5sig_pool(void) EXPORT_SYMBOL(__tcp_put_md5sig_pool); #endif +void tcp_done(struct sock *sk) +{ + if(sk->sk_state == TCP_SYN_SENT || sk->sk_state == TCP_SYN_RECV) + TCP_INC_STATS_BH(TCP_MIB_ATTEMPTFAILS); + + tcp_set_state(sk, TCP_CLOSE); + tcp_clear_xmit_timers(sk); + + sk->sk_shutdown = SHUTDOWN_MASK; + + if (!sock_flag(sk, SOCK_DEAD)) + sk->sk_state_change(sk); + else + inet_csk_destroy_sock(sk); +} +EXPORT_SYMBOL_GPL(tcp_done); + extern void __skb_cb_too_small_for_tcp(int, int); extern struct tcp_congestion_ops tcp_reno; diff -puN net/ipv4/tcp_cong.c~git-net net/ipv4/tcp_cong.c --- a/net/ipv4/tcp_cong.c~git-net +++ a/net/ipv4/tcp_cong.c @@ -12,6 +12,8 @@ #include #include +int sysctl_tcp_max_ssthresh = 0; + static DEFINE_SPINLOCK(tcp_cong_list_lock); static LIST_HEAD(tcp_cong_list); @@ -271,10 +273,13 @@ int tcp_set_congestion_control(struct so /* - * Linear increase during slow start + * Slow start (exponential increase) with + * RFC3742 Limited Slow Start (fast linear increase) support. 
*/ void tcp_slow_start(struct tcp_sock *tp) { + int cnt = 0; + if (sysctl_tcp_abc) { /* RFC3465: Slow Start * TCP sender SHOULD increase cwnd by the number of @@ -283,17 +288,25 @@ void tcp_slow_start(struct tcp_sock *tp) */ if (tp->bytes_acked < tp->mss_cache) return; - - /* We MAY increase by 2 if discovered delayed ack */ - if (sysctl_tcp_abc > 1 && tp->bytes_acked >= 2*tp->mss_cache) { - if (tp->snd_cwnd < tp->snd_cwnd_clamp) - tp->snd_cwnd++; - } } + + if (sysctl_tcp_max_ssthresh > 0 && + tp->snd_cwnd > sysctl_tcp_max_ssthresh) + cnt += sysctl_tcp_max_ssthresh>>1; + else + cnt += tp->snd_cwnd; + + /* RFC3465: We MAY increase by 2 if discovered delayed ack */ + if (sysctl_tcp_abc > 1 && tp->bytes_acked >= 2*tp->mss_cache) + cnt <<= 1; tp->bytes_acked = 0; - if (tp->snd_cwnd < tp->snd_cwnd_clamp) - tp->snd_cwnd++; + tp->snd_cwnd_cnt += cnt; + while (tp->snd_cwnd_cnt >= tp->snd_cwnd) { + tp->snd_cwnd_cnt -= tp->snd_cwnd; + if (tp->snd_cwnd < tp->snd_cwnd_clamp) + tp->snd_cwnd++; + } } EXPORT_SYMBOL_GPL(tcp_slow_start); diff -puN net/ipv4/tcp_cubic.c~git-net net/ipv4/tcp_cubic.c --- a/net/ipv4/tcp_cubic.c~git-net +++ a/net/ipv4/tcp_cubic.c @@ -1,5 +1,5 @@ /* - * TCP CUBIC: Binary Increase Congestion control for TCP v2.0 + * TCP CUBIC: Binary Increase Congestion control for TCP v2.1 * * This is from the implementation of CUBIC TCP in * Injong Rhee, Lisong Xu. @@ -51,8 +51,6 @@ MODULE_PARM_DESC(bic_scale, "scale (scal module_param(tcp_friendliness, int, 0644); MODULE_PARM_DESC(tcp_friendliness, "turn on/off tcp friendliness"); -#include - /* BIC TCP Parameters */ struct bictcp { u32 cnt; /* increase cwnd by 1 after ACKs */ @@ -93,50 +91,51 @@ static void bictcp_init(struct sock *sk) tcp_sk(sk)->snd_ssthresh = initial_ssthresh; } -/* 64bit divisor, dividend and result. dynamic precision */ -static inline u_int64_t div64_64(u_int64_t dividend, u_int64_t divisor) -{ - u_int32_t d = divisor; - - if (divisor > 0xffffffffULL) { - unsigned int shift = fls(divisor >> 32); - - d = divisor >> shift; - dividend >>= shift; - } - - /* avoid 64 bit division if possible */ - if (dividend >> 32) - do_div(dividend, d); - else - dividend = (uint32_t) dividend / d; - - return dividend; -} - -/* - * calculate the cubic root of x using Newton-Raphson +/* calculate the cubic root of x using a table lookup followed by one + * Newton-Raphson iteration. + * Avg err ~= 0.195% */ static u32 cubic_root(u64 a) { - u32 x, x1; - - /* Initial estimate is based on: - * cbrt(x) = exp(log(x) / 3) + u32 x, b, shift; + /* + * cbrt(x) MSB values for x MSB values in [0..63]. 
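The rewritten tcp_slow_start() above adds RFC 3742 Limited Slow Start: below sysctl_tcp_max_ssthresh the window still earns one full cwnd of credit per ACK (doubling per RTT), above it each ACK only earns max_ssthresh/2 worth of credit, so per-RTT growth flattens to about max_ssthresh/2 segments. A stand-alone user-space model of that accounting (ABC and the delayed-ACK doubling are left out):

#include <stdio.h>

int main(void)
{
	unsigned int cwnd = 10, cwnd_cnt = 0;
	unsigned int max_ssthresh = 100;	/* plays the role of sysctl_tcp_max_ssthresh */
	unsigned int rtt, acks, cnt;

	for (rtt = 0; rtt < 12; rtt++) {
		printf("rtt %2u  cwnd %u\n", rtt, cwnd);
		for (acks = cwnd; acks > 0; acks--) {
			/* same credit rule as the new tcp_slow_start() */
			cnt = (max_ssthresh > 0 && cwnd > max_ssthresh)
				? max_ssthresh >> 1 : cwnd;
			cwnd_cnt += cnt;
			while (cwnd_cnt >= cwnd) {
				cwnd_cnt -= cwnd;
				cwnd++;
			}
		}
	}
	return 0;
}

With max_ssthresh = 100 the window roughly doubles each RTT until it passes 100, after which it grows by about 50 segments per RTT instead of doubling.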
+ * Precomputed then refined by hand - Willy Tarreau + * + * For x in [0..63], + * v = cbrt(x << 18) - 1 + * cbrt(x) = (v[x] + 10) >> 6 */ - x = 1u << (fls64(a)/3); + static const u8 v[] = { + /* 0x00 */ 0, 54, 54, 54, 118, 118, 118, 118, + /* 0x08 */ 123, 129, 134, 138, 143, 147, 151, 156, + /* 0x10 */ 157, 161, 164, 168, 170, 173, 176, 179, + /* 0x18 */ 181, 185, 187, 190, 192, 194, 197, 199, + /* 0x20 */ 200, 202, 204, 206, 209, 211, 213, 215, + /* 0x28 */ 217, 219, 221, 222, 224, 225, 227, 229, + /* 0x30 */ 231, 232, 234, 236, 237, 239, 240, 242, + /* 0x38 */ 244, 245, 246, 248, 250, 251, 252, 254, + }; + + b = fls64(a); + if (b < 7) { + /* a in [0..63] */ + return ((u32)v[(u32)a] + 35) >> 6; + } + + b = ((b * 84) >> 8) - 1; + shift = (a >> (b * 3)); + + x = ((u32)(((u32)v[shift] + 10) << b)) >> 6; /* - * Iteration based on: + * Newton-Raphson iteration * 2 * x = ( 2 * x + a / x ) / 3 * k+1 k k */ - do { - x1 = x; - x = (2 * x + (uint32_t) div64_64(a, x*x)) / 3; - } while (abs(x1 - x) > 1); - + x = (2 * x + (u32)div64_64(a, (u64)x * (u64)(x - 1))); + x = ((x * 341) >> 10); return x; } @@ -215,7 +214,9 @@ static inline void bictcp_update(struct if (ca->delay_min > 0) { /* max increment = Smax * rtt / 0.1 */ min_cnt = (cwnd * HZ * 8)/(10 * max_increment * ca->delay_min); - if (ca->cnt < min_cnt) + + /* use concave growth when the target is above the origin */ + if (ca->cnt < min_cnt && t >= ca->bic_K) ca->cnt = min_cnt; } @@ -401,4 +402,4 @@ module_exit(cubictcp_unregister); MODULE_AUTHOR("Sangtae Ha, Stephen Hemminger"); MODULE_LICENSE("GPL"); MODULE_DESCRIPTION("CUBIC TCP"); -MODULE_VERSION("2.0"); +MODULE_VERSION("2.1"); diff -puN net/ipv4/tcp_hybla.c~git-net net/ipv4/tcp_hybla.c --- a/net/ipv4/tcp_hybla.c~git-net +++ a/net/ipv4/tcp_hybla.c @@ -144,7 +144,7 @@ static void hybla_cong_avoid(struct sock ca->snd_cwnd_cents += odd; /* check when fractions goes >=128 and increase cwnd by 1. */ - while(ca->snd_cwnd_cents >= 128) { + while (ca->snd_cwnd_cents >= 128) { tp->snd_cwnd++; ca->snd_cwnd_cents -= 128; tp->snd_cwnd_cnt = 0; diff -puN /dev/null net/ipv4/tcp_illinois.c --- /dev/null +++ a/net/ipv4/tcp_illinois.c @@ -0,0 +1,284 @@ +/* + * TCP Illinois congestion control. + * Home page: + * http://www.ews.uiuc.edu/~shaoliu/tcpillinois/index.html + * + * The algorithm is described in: + * "TCP-Illinois: A Loss and Delay-Based Congestion Control Algorithm + * for High-Speed Networks" + * http://www.ews.uiuc.edu/~shaoliu/papersandslides/liubassri06perf.pdf + * + * Implemented from description in paper and ns-2 simulation. + * Copyright (C) 2007 Stephen Hemminger + */ + +#include +#include +#include +#include +#include + +#define ALPHA_SHIFT 7 +#define ALPHA_SCALE (1u<last_alpha = ALPHA_BASE; + ca->min_rtt = 0x7fffffff; +} + +/* + * Keep track of min, max and average RTT + */ +static void tcp_illinois_rtt_calc(struct sock *sk, u32 rtt) +{ + struct tcp_illinois *ca = inet_csk_ca(sk); + + if (rtt < ca->min_rtt) + ca->min_rtt = rtt; + if (rtt > ca->max_rtt) + ca->max_rtt = rtt; + + if (++ca->rtt_cnt == 1) + ca->sum_rtt = rtt; + else + ca->sum_rtt += rtt; +} + +/* max queuing delay */ +static inline u32 max_delay(const struct tcp_illinois *ca) +{ + return ca->max_rtt - ca->min_rtt; +} + +/* average queueing delay */ +static u32 avg_delay(struct tcp_illinois *ca) +{ + u64 avg_rtt = ca->sum_rtt; + + do_div(avg_rtt, ca->rtt_cnt); + + ca->sum_rtt = 0; + ca->rtt_cnt = 0; + + return avg_rtt - ca->min_rtt; +} + +/* + * Compute value of alpha used for additive increase. 
+ * If small window then use 1.0, equivalent to Reno. + * + * For larger windows, adjust based on average delay. + * A. If average delay is at minimum (we are uncongested), + * then use large alpha (10.0) to increase faster. + * B. If average delay is at maximum (getting congested) + * then use small alpha (1.0) + * + * The result is a convex window growth curve. + */ +static u32 alpha(const struct sock *sk) +{ + struct tcp_sock *tp = tcp_sk(sk); + struct tcp_illinois *ca = inet_csk_ca(sk); + u32 dm = max_delay(ca); + u32 da = avg_delay(ca); + u32 d1, a; + + if (tp->snd_cwnd < win_thresh) + return ALPHA_BASE; /* same as Reno (1.0) */ + + d1 = dm / 100; + if (da <= d1) { + /* Don't let noise force agressive response */ + if (ca->rtt_low < THETA) { + ++ca->rtt_low; + return ca->last_alpha; + } else + return ALPHA_MAX; + } + + ca->rtt_low = 0; + + /* + * Based on: + * + * (dm - d1) amin amax + * k1 = ------------------- + * amax - amin + * + * (dm - d1) amin + * k2 = ---------------- - d1 + * amax - amin + * + * k1 + * alpha = ---------- + * k2 + da + */ + + dm -= d1; + da -= d1; + + a = (dm * ALPHA_MAX) / (dm - (da * (ALPHA_MAX - ALPHA_MIN)) / ALPHA_MIN); + ca->last_alpha = a; + return a; +} + +/* + * Increase window in response to successful acknowledgment. + */ +static void tcp_illinois_cong_avoid(struct sock *sk, u32 ack, u32 rtt, + u32 in_flight, int flag) +{ + struct tcp_sock *tp = tcp_sk(sk); + + /* RFC2861 only increase cwnd if fully utilized */ + if (!tcp_is_cwnd_limited(sk, in_flight)) + return; + + /* In slow start */ + if (tp->snd_cwnd <= tp->snd_ssthresh) + tcp_slow_start(tp); + + else { + /* additive increase cwnd += alpha / cwnd */ + if ((tp->snd_cwnd_cnt * alpha(sk)) >> ALPHA_SHIFT >= tp->snd_cwnd) { + if (tp->snd_cwnd < tp->snd_cwnd_clamp) + tp->snd_cwnd++; + tp->snd_cwnd_cnt = 0; + } else + tp->snd_cwnd_cnt++; + } +} + +/* + * Beta used for multiplicative decrease. + * For small window sizes returns same value as Reno (0.5) + * + * If delay is small (10% of max) then beta = 1/8 + * If delay is up to 80% of max then beta = 1/2 + * In between is a linear function + */ +static inline u32 beta(struct sock *sk) +{ + struct tcp_sock *tp = tcp_sk(sk); + struct tcp_illinois *ca = inet_csk_ca(sk); + u32 dm = max_delay(ca); + u32 da = avg_delay(ca); + u32 d2, d3; + + if (tp->snd_cwnd < win_thresh) + return BETA_BASE; + + d2 = dm / 10; + if (da <= d2) + return BETA_MIN; + d3 = (8 * dm) / 10; + if (da >= d3 || d3 <= d2) + return BETA_MAX; + + /* + * Based on: + * + * bmin d3 - bmax d2 + * k3 = ------------------- + * d3 - d2 + * + * bmax - bmin + * k4 = ------------- + * d3 - d2 + * + * b = k3 + k4 da + */ + return (BETA_MIN * d3 - BETA_MAX * d2 + (BETA_MAX - BETA_MIN) * da) + / (d3 - d2); +} + +static u32 tcp_illinois_ssthresh(struct sock *sk) +{ + struct tcp_sock *tp = tcp_sk(sk); + + /* Multiplicative decrease */ + return max((tp->snd_cwnd * beta(sk)) >> BETA_SHIFT, 2U); +} + +/* Extract info for TCP socket info provided via netlink. 
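The alpha()/beta() arithmetic above is easier to follow with concrete numbers. Below is a floating-point model of the beta() interpolation spelled out in its comment, using the 1/8 and 1/2 endpoints the comment gives (the kernel does the same thing in fixed point with the BETA_* constants):

#include <stdio.h>

/* beta ramps linearly from 1/8 at 10% of the max queueing delay up to
 * 1/2 at 80% of it; outside that band it is clamped. */
static double beta_model(double da, double dm)
{
	double d2 = dm / 10.0;		/* 10% of max delay */
	double d3 = 8.0 * dm / 10.0;	/* 80% of max delay */

	if (da <= d2)
		return 1.0 / 8.0;
	if (da >= d3 || d3 <= d2)
		return 1.0 / 2.0;
	return (d3 / 8.0 - d2 / 2.0 + (1.0 / 2.0 - 1.0 / 8.0) * da) / (d3 - d2);
}

int main(void)
{
	double dm = 100.0;	/* max queueing delay, arbitrary units */
	double da;

	for (da = 0.0; da <= dm; da += 10.0)
		printf("avg delay %5.1f -> beta %.3f\n", da, beta_model(da, dm));
	return 0;
}

tcp_illinois_ssthresh() above then scales snd_cwnd by this beta (in fixed point, clamped to at least 2) to pick the new slow start threshold.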
+ * We aren't really doing Vegas, but we can provide RTT info + */ +static void tcp_illinois_get_info(struct sock *sk, u32 ext, + struct sk_buff *skb) +{ + const struct tcp_illinois *ca = inet_csk_ca(sk); + + if (ext & (1 << (INET_DIAG_VEGASINFO - 1))) { + struct tcpvegas_info info = { + .tcpv_enabled = 1, + .tcpv_rttcnt = ca->rtt_cnt, + .tcpv_minrtt = ca->min_rtt, + }; + u64 avg_rtt = ca->sum_rtt; + do_div(avg_rtt, ca->rtt_cnt); + info.tcpv_rtt = avg_rtt; + + nla_put(skb, INET_DIAG_VEGASINFO, sizeof(info), &info); + } +} + +static struct tcp_congestion_ops tcp_illinois = { + .init = tcp_illinois_init, + .ssthresh = tcp_illinois_ssthresh, + .min_cwnd = tcp_reno_min_cwnd, + .cong_avoid = tcp_illinois_cong_avoid, + .rtt_sample = tcp_illinois_rtt_calc, + .get_info = tcp_illinois_get_info, + + .owner = THIS_MODULE, + .name = "illinois", +}; + +static int __init tcp_illinois_register(void) +{ + BUILD_BUG_ON(sizeof(struct tcp_illinois) > ICSK_CA_PRIV_SIZE); + return tcp_register_congestion_control(&tcp_illinois); +} + +static void __exit tcp_illinois_unregister(void) +{ + tcp_unregister_congestion_control(&tcp_illinois); +} + +module_init(tcp_illinois_register); +module_exit(tcp_illinois_unregister); + +MODULE_AUTHOR("Stephen Hemminger, Shao Liu"); +MODULE_LICENSE("GPL"); +MODULE_DESCRIPTION("TCP Illinois"); +MODULE_VERSION("0.3"); diff -puN net/ipv4/tcp_input.c~git-net net/ipv4/tcp_input.c --- a/net/ipv4/tcp_input.c~git-net +++ a/net/ipv4/tcp_input.c @@ -86,6 +86,7 @@ int sysctl_tcp_stdurg __read_mostly; int sysctl_tcp_rfc1337 __read_mostly; int sysctl_tcp_max_orphans __read_mostly = NR_FILE; int sysctl_tcp_frto __read_mostly; +int sysctl_tcp_frto_response __read_mostly; int sysctl_tcp_nometrics_save __read_mostly; int sysctl_tcp_moderate_rcvbuf __read_mostly = 1; @@ -100,6 +101,7 @@ int sysctl_tcp_abc __read_mostly; #define FLAG_ECE 0x40 /* ECE in this ACK */ #define FLAG_DATA_LOST 0x80 /* SACK detected data lossage. */ #define FLAG_SLOWPATH 0x100 /* Do not skip RFC checks for window update.*/ +#define FLAG_ONLY_ORIG_SACKED 0x200 /* SACKs only non-rexmit sent before RTO */ #define FLAG_ACKED (FLAG_DATA_ACKED|FLAG_SYN_ACKED) #define FLAG_NOT_DUP (FLAG_DATA|FLAG_WIN_UPDATE|FLAG_ACKED) @@ -110,6 +112,8 @@ int sysctl_tcp_abc __read_mostly; #define IsFack(tp) ((tp)->rx_opt.sack_ok & 2) #define IsDSack(tp) ((tp)->rx_opt.sack_ok & 4) +#define IsSackFrto() (sysctl_tcp_frto == 0x2) + #define TCP_REMNANT (TCP_FLAG_FIN|TCP_FLAG_URG|TCP_FLAG_SYN|TCP_FLAG_PSH) /* Adapt the MSS value used to make delayed ack decision to the @@ -136,7 +140,7 @@ static void tcp_measure_rcv_mss(struct s * * "len" is invariant segment length, including TCP header. */ - len += skb->data - skb->h.raw; + len += skb->data - skb_transport_header(skb); if (len >= TCP_MIN_RCVMSS + sizeof(struct tcphdr) || /* If PSH is not set, packet should be * full sized, provided peer TCP is not badly broken. @@ -144,7 +148,7 @@ static void tcp_measure_rcv_mss(struct s * to handle super-low mtu links fairly. */ (len >= TCP_MIN_MSS + sizeof(struct tcphdr) && - !(tcp_flag_word(skb->h.th)&TCP_REMNANT))) { + !(tcp_flag_word(tcp_hdr(skb)) & TCP_REMNANT))) { /* Subtract also invariant (if peer is RFC compliant), * tcp header plus fixed timestamp option length. * Resulting "len" is MSS free of SACK jitter. @@ -231,9 +235,9 @@ static void tcp_fixup_sndbuf(struct sock */ /* Slow part of check#2. 
*/ -static int __tcp_grow_window(const struct sock *sk, struct tcp_sock *tp, - const struct sk_buff *skb) +static int __tcp_grow_window(const struct sock *sk, const struct sk_buff *skb) { + struct tcp_sock *tp = tcp_sk(sk); /* Optimize this! */ int truesize = tcp_win_from_space(skb->truesize)/2; int window = tcp_win_from_space(sysctl_tcp_rmem[2])/2; @@ -248,9 +252,11 @@ static int __tcp_grow_window(const struc return 0; } -static void tcp_grow_window(struct sock *sk, struct tcp_sock *tp, +static void tcp_grow_window(struct sock *sk, struct sk_buff *skb) { + struct tcp_sock *tp = tcp_sk(sk); + /* Check #1 */ if (tp->rcv_ssthresh < tp->window_clamp && (int)tp->rcv_ssthresh < tcp_space(sk) && @@ -263,7 +269,7 @@ static void tcp_grow_window(struct sock if (tcp_win_from_space(skb->truesize) <= skb->len) incr = 2*tp->advmss; else - incr = __tcp_grow_window(sk, tp, skb); + incr = __tcp_grow_window(sk, skb); if (incr) { tp->rcv_ssthresh = min(tp->rcv_ssthresh + incr, tp->window_clamp); @@ -326,8 +332,9 @@ static void tcp_init_buffer_space(struct } /* 5. Recalculate window clamp after socket hit its memory bounds. */ -static void tcp_clamp_window(struct sock *sk, struct tcp_sock *tp) +static void tcp_clamp_window(struct sock *sk) { + struct tcp_sock *tp = tcp_sk(sk); struct inet_connection_sock *icsk = inet_csk(sk); icsk->icsk_ack.quick = 0; @@ -499,8 +506,9 @@ new_measure: * each ACK we send, he increments snd_cwnd and transmits more of his * queue. -DaveM */ -static void tcp_event_data_recv(struct sock *sk, struct tcp_sock *tp, struct sk_buff *skb) +static void tcp_event_data_recv(struct sock *sk, struct sk_buff *skb) { + struct tcp_sock *tp = tcp_sk(sk); struct inet_connection_sock *icsk = inet_csk(sk); u32 now; @@ -541,7 +549,7 @@ static void tcp_event_data_recv(struct s TCP_ECN_check_ce(tp, skb); if (skb->len >= 128) - tcp_grow_window(sk, tp, skb); + tcp_grow_window(sk, skb); } /* Called to compute a smoothed rtt estimate. The data fed to this @@ -574,7 +582,7 @@ static void tcp_rtt_estimator(struct soc * does not matter how to _calculate_ it. Seems, it was trap * that VJ failed to avoid. 
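The estimator in tcp_rtt_estimator() here keeps srtt in fixed point as 8 times the RTT, so the update m -= (srtt >> 3); srtt += m; is the classic srtt = 7/8 * srtt + 1/8 * m smoothing that the comment above is defending. A quick user-space illustration (mdev/rttvar handling omitted):

#include <stdio.h>

int main(void)
{
	int srtt = 100 << 3;	/* srtt stored as 8*RTT; start at 100 ms */
	static const int samples[] = { 100, 120, 80, 300, 100, 100 };
	unsigned int i;

	for (i = 0; i < sizeof(samples) / sizeof(samples[0]); i++) {
		int m = samples[i];

		m -= srtt >> 3;		/* error against current estimate */
		srtt += m;		/* srtt = 7/8 * srtt + 1/8 * sample */
		printf("sample %3d ms -> srtt %d.%03d ms\n",
		       samples[i], srtt >> 3, (srtt & 7) * 125);
	}
	return 0;
}

The single 300 ms outlier only drags the estimate up to about 125 ms, which is exactly the damping behaviour the comment describes.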
8) */ - if(m == 0) + if (m == 0) m = 1; if (tp->srtt != 0) { m -= (tp->srtt >> 3); /* m is now error in rtt est */ @@ -759,15 +767,17 @@ __u32 tcp_init_cwnd(struct tcp_sock *tp, } /* Set slow start threshold and cwnd not falling to slow start */ -void tcp_enter_cwr(struct sock *sk) +void tcp_enter_cwr(struct sock *sk, const int set_ssthresh) { struct tcp_sock *tp = tcp_sk(sk); + const struct inet_connection_sock *icsk = inet_csk(sk); tp->prior_ssthresh = 0; tp->bytes_acked = 0; - if (inet_csk(sk)->icsk_ca_state < TCP_CA_CWR) { + if (icsk->icsk_ca_state < TCP_CA_CWR) { tp->undo_marker = 0; - tp->snd_ssthresh = inet_csk(sk)->icsk_ca_ops->ssthresh(sk); + if (set_ssthresh) + tp->snd_ssthresh = icsk->icsk_ca_ops->ssthresh(sk); tp->snd_cwnd = min(tp->snd_cwnd, tcp_packets_in_flight(tp) + 1U); tp->snd_cwnd_cnt = 0; @@ -934,7 +944,8 @@ tcp_sacktag_write_queue(struct sock *sk, { const struct inet_connection_sock *icsk = inet_csk(sk); struct tcp_sock *tp = tcp_sk(sk); - unsigned char *ptr = ack_skb->h.raw + TCP_SKB_CB(ack_skb)->sacked; + unsigned char *ptr = (skb_transport_header(ack_skb) + + TCP_SKB_CB(ack_skb)->sacked); struct tcp_sack_block_wire *sp = (struct tcp_sack_block_wire *)(ptr+2); struct sk_buff *cached_skb; int num_sacks = (ptr[1] - TCPOLEN_SACK_BASE)>>3; @@ -1038,7 +1049,7 @@ tcp_sacktag_write_queue(struct sock *sk, cached_skb = tp->fastpath_skb_hint; cached_fack_count = tp->fastpath_cnt_hint; if (!cached_skb) { - cached_skb = sk->sk_write_queue.next; + cached_skb = tcp_write_queue_head(sk); cached_fack_count = 0; } @@ -1055,10 +1066,13 @@ tcp_sacktag_write_queue(struct sock *sk, if (after(end_seq, tp->high_seq)) flag |= FLAG_DATA_LOST; - sk_stream_for_retrans_queue_from(skb, sk) { + tcp_for_write_queue_from(skb, sk) { int in_sack, pcount; u8 sacked; + if (skb == tcp_send_head(sk)) + break; + cached_skb = skb; cached_fack_count = fack_count; if (i == first_sack_index) { @@ -1159,6 +1173,18 @@ tcp_sacktag_write_queue(struct sock *sk, /* clear lost hint */ tp->retransmit_skb_hint = NULL; } + /* SACK enhanced F-RTO detection. + * Set flag if and only if non-rexmitted + * segments below frto_highmark are + * SACKed (RFC4138; Appendix B). + * Clearing correct due to in-order walk + */ + if (after(end_seq, tp->frto_highmark)) { + flag &= ~FLAG_ONLY_ORIG_SACKED; + } else { + if (!(sacked & TCPCB_RETRANS)) + flag |= FLAG_ONLY_ORIG_SACKED; + } } TCP_SKB_CB(skb)->sacked |= TCPCB_SACKED_ACKED; @@ -1195,7 +1221,9 @@ tcp_sacktag_write_queue(struct sock *sk, if (lost_retrans && icsk->icsk_ca_state == TCP_CA_Recovery) { struct sk_buff *skb; - sk_stream_for_retrans_queue(skb, sk) { + tcp_for_write_queue(skb, sk) { + if (skb == tcp_send_head(sk)) + break; if (after(TCP_SKB_CB(skb)->seq, lost_retrans)) break; if (!after(TCP_SKB_CB(skb)->end_seq, tp->snd_una)) @@ -1224,7 +1252,8 @@ tcp_sacktag_write_queue(struct sock *sk, tp->left_out = tp->sacked_out + tp->lost_out; - if ((reord < tp->fackets_out) && icsk->icsk_ca_state != TCP_CA_Loss) + if ((reord < tp->fackets_out) && icsk->icsk_ca_state != TCP_CA_Loss && + (!tp->frto_highmark || after(tp->snd_una, tp->frto_highmark))) tcp_update_reordering(sk, ((tp->fackets_out + 1) - reord), 0); #if FASTRETRANS_DEBUG > 0 @@ -1236,9 +1265,54 @@ tcp_sacktag_write_queue(struct sock *sk, return flag; } -/* RTO occurred, but do not yet enter loss state. Instead, transmit two new - * segments to see from the next ACKs whether any data was really missing. - * If the RTO was spurious, new ACKs should arrive. 
+/* F-RTO can only be used if these conditions are satisfied: + * - there must be some unsent new data + * - the advertised window should allow sending it + * - TCP has never retransmitted anything other than head (SACK enhanced + * variant from Appendix B of RFC4138 is more robust here) + */ +int tcp_use_frto(struct sock *sk) +{ + const struct tcp_sock *tp = tcp_sk(sk); + struct sk_buff *skb; + + if (!sysctl_tcp_frto || !tcp_send_head(sk) || + after(TCP_SKB_CB(tcp_send_head(sk))->end_seq, + tp->snd_una + tp->snd_wnd)) + return 0; + + if (IsSackFrto()) + return 1; + + /* Avoid expensive walking of rexmit queue if possible */ + if (tp->retrans_out > 1) + return 0; + + skb = tcp_write_queue_head(sk); + skb = tcp_write_queue_next(sk, skb); /* Skips head */ + tcp_for_write_queue_from(skb, sk) { + if (skb == tcp_send_head(sk)) + break; + if (TCP_SKB_CB(skb)->sacked&TCPCB_RETRANS) + return 0; + /* Short-circuit when first non-SACKed skb has been checked */ + if (!(TCP_SKB_CB(skb)->sacked&TCPCB_SACKED_ACKED)) + break; + } + return 1; +} + +/* RTO occurred, but do not yet enter Loss state. Instead, defer RTO + * recovery a bit and use heuristics in tcp_process_frto() to detect if + * the RTO was spurious. Only clear SACKED_RETRANS of the head here to + * keep retrans_out counting accurate (with SACK F-RTO, other than head + * may still have that bit set); TCPCB_LOST and remaining SACKED_RETRANS + * bits are handled if the Loss state is really to be entered (in + * tcp_enter_frto_loss). + * + * Do like tcp_enter_loss() would; when RTO expires the second time it + * does: + * "Reduce ssthresh if it has not yet been made inside this window." */ void tcp_enter_frto(struct sock *sk) { @@ -1246,39 +1320,69 @@ void tcp_enter_frto(struct sock *sk) struct tcp_sock *tp = tcp_sk(sk); struct sk_buff *skb; - tp->frto_counter = 1; - - if (icsk->icsk_ca_state <= TCP_CA_Disorder || + if ((!tp->frto_counter && icsk->icsk_ca_state <= TCP_CA_Disorder) || tp->snd_una == tp->high_seq || - (icsk->icsk_ca_state == TCP_CA_Loss && !icsk->icsk_retransmits)) { + ((icsk->icsk_ca_state == TCP_CA_Loss || tp->frto_counter) && + !icsk->icsk_retransmits)) { tp->prior_ssthresh = tcp_current_ssthresh(sk); - tp->snd_ssthresh = icsk->icsk_ca_ops->ssthresh(sk); + /* Our state is too optimistic in ssthresh() call because cwnd + * is not reduced until tcp_enter_frto_loss() when previous FRTO + * recovery has not yet completed. Pattern would be this: RTO, + * Cumulative ACK, RTO (2xRTO for the same segment does not end + * up here twice). + * RFC4138 should be more specific on what to do, even though + * RTO is quite unlikely to occur after the first Cumulative ACK + * due to back-off and complexity of triggering events ... + */ + if (tp->frto_counter) { + u32 stored_cwnd; + stored_cwnd = tp->snd_cwnd; + tp->snd_cwnd = 2; + tp->snd_ssthresh = icsk->icsk_ca_ops->ssthresh(sk); + tp->snd_cwnd = stored_cwnd; + } else { + tp->snd_ssthresh = icsk->icsk_ca_ops->ssthresh(sk); + } + /* ... in theory, cong.control module could do "any tricks" in + * ssthresh(), which means that ca_state, lost bits and lost_out + * counter would have to be faked before the call occurs. We + * consider that too expensive, unlikely and hacky, so modules + * using these in ssthresh() must deal these incompatibility + * issues if they receives CA_EVENT_FRTO and frto_counter != 0 + */ tcp_ca_event(sk, CA_EVENT_FRTO); } - /* Have to clear retransmission markers here to keep the bookkeeping - * in shape, even though we are not yet in Loss state. 
- * If something was really lost, it is eventually caught up - * in tcp_enter_frto_loss. - */ - tp->retrans_out = 0; tp->undo_marker = tp->snd_una; tp->undo_retrans = 0; - sk_stream_for_retrans_queue(skb, sk) { - TCP_SKB_CB(skb)->sacked &= ~TCPCB_RETRANS; + skb = tcp_write_queue_head(sk); + if (TCP_SKB_CB(skb)->sacked & TCPCB_SACKED_RETRANS) { + TCP_SKB_CB(skb)->sacked &= ~TCPCB_SACKED_RETRANS; + tp->retrans_out -= tcp_skb_pcount(skb); } tcp_sync_left_out(tp); - tcp_set_ca_state(sk, TCP_CA_Open); - tp->frto_highmark = tp->snd_nxt; + /* Earlier loss recovery underway (see RFC4138; Appendix B). + * The last condition is necessary at least in tp->frto_counter case. + */ + if (IsSackFrto() && (tp->frto_counter || + ((1 << icsk->icsk_ca_state) & (TCPF_CA_Recovery|TCPF_CA_Loss))) && + after(tp->high_seq, tp->snd_una)) { + tp->frto_highmark = tp->high_seq; + } else { + tp->frto_highmark = tp->snd_nxt; + } + tcp_set_ca_state(sk, TCP_CA_Disorder); + tp->high_seq = tp->snd_nxt; + tp->frto_counter = 1; } /* Enter Loss state after F-RTO was applied. Dupack arrived after RTO, * which indicates that we should follow the traditional RTO recovery, * i.e. mark everything lost and do go-back-N retransmission. */ -static void tcp_enter_frto_loss(struct sock *sk) +static void tcp_enter_frto_loss(struct sock *sk, int allowed_segments, int flag) { struct tcp_sock *tp = tcp_sk(sk); struct sk_buff *skb; @@ -1287,10 +1391,23 @@ static void tcp_enter_frto_loss(struct s tp->sacked_out = 0; tp->lost_out = 0; tp->fackets_out = 0; + tp->retrans_out = 0; - sk_stream_for_retrans_queue(skb, sk) { + tcp_for_write_queue(skb, sk) { + if (skb == tcp_send_head(sk)) + break; cnt += tcp_skb_pcount(skb); - TCP_SKB_CB(skb)->sacked &= ~TCPCB_LOST; + /* + * Count the retransmission made on RTO correctly (only when + * waiting for the first ACK and did not get it)... + */ + if ((tp->frto_counter == 1) && !(flag&FLAG_DATA_ACKED)) { + tp->retrans_out += tcp_skb_pcount(skb); + /* ...enter this if branch just for the first segment */ + flag |= FLAG_DATA_ACKED; + } else { + TCP_SKB_CB(skb)->sacked &= ~(TCPCB_LOST|TCPCB_SACKED_RETRANS); + } if (!(TCP_SKB_CB(skb)->sacked&TCPCB_SACKED_ACKED)) { /* Do not mark those segments lost that were @@ -1308,7 +1425,7 @@ static void tcp_enter_frto_loss(struct s } tcp_sync_left_out(tp); - tp->snd_cwnd = tp->frto_counter + tcp_packets_in_flight(tp)+1; + tp->snd_cwnd = tcp_packets_in_flight(tp) + allowed_segments; tp->snd_cwnd_cnt = 0; tp->snd_cwnd_stamp = tcp_time_stamp; tp->undo_marker = 0; @@ -1366,7 +1483,9 @@ void tcp_enter_loss(struct sock *sk, int if (!how) tp->undo_marker = tp->snd_una; - sk_stream_for_retrans_queue(skb, sk) { + tcp_for_write_queue(skb, sk) { + if (skb == tcp_send_head(sk)) + break; cnt += tcp_skb_pcount(skb); if (TCP_SKB_CB(skb)->sacked&TCPCB_RETRANS) tp->undo_marker = 0; @@ -1401,14 +1520,14 @@ static int tcp_check_sack_reneging(struc * receiver _host_ is heavily congested (or buggy). * Do processing similar to RTO timeout. 
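tcp_enter_frto() above is reached from the retransmit timer only when tcp_use_frto() agrees; the dispatch on RTO expiry is roughly the following (a simplified sketch of the tcp_timer.c side, which is not shown in this hunk; statistics and error handling omitted):

/* illustrative only: mirrors the shape of tcp_retransmit_timer() */
static void rto_expired_sketch(struct sock *sk)
{
	if (tcp_use_frto(sk))
		tcp_enter_frto(sk);	/* defer the verdict to the next ACKs */
	else
		tcp_enter_loss(sk, 0);	/* conventional go-back-N entry */

	tcp_retransmit_skb(sk, tcp_write_queue_head(sk));
	/* ...followed by backing off the RTO timer as usual */
}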
*/ - if ((skb = skb_peek(&sk->sk_write_queue)) != NULL && + if ((skb = tcp_write_queue_head(sk)) != NULL && (TCP_SKB_CB(skb)->sacked & TCPCB_SACKED_ACKED)) { struct inet_connection_sock *icsk = inet_csk(sk); NET_INC_STATS_BH(LINUX_MIB_TCPSACKRENEGING); tcp_enter_loss(sk, 1); icsk->icsk_retransmits++; - tcp_retransmit_skb(sk, skb_peek(&sk->sk_write_queue)); + tcp_retransmit_skb(sk, tcp_write_queue_head(sk)); inet_csk_reset_xmit_timer(sk, ICSK_TIME_RETRANS, icsk->icsk_rto, TCP_RTO_MAX); return 1; @@ -1426,10 +1545,12 @@ static inline int tcp_skb_timedout(struc return (tcp_time_stamp - TCP_SKB_CB(skb)->when > inet_csk(sk)->icsk_rto); } -static inline int tcp_head_timedout(struct sock *sk, struct tcp_sock *tp) +static inline int tcp_head_timedout(struct sock *sk) { + struct tcp_sock *tp = tcp_sk(sk); + return tp->packets_out && - tcp_skb_timedout(sk, skb_peek(&sk->sk_write_queue)); + tcp_skb_timedout(sk, tcp_write_queue_head(sk)); } /* Linux NewReno/SACK/FACK/ECN state machine. @@ -1525,10 +1646,15 @@ static inline int tcp_head_timedout(stru * Main question: may we further continue forward transmission * with the same cwnd? */ -static int tcp_time_to_recover(struct sock *sk, struct tcp_sock *tp) +static int tcp_time_to_recover(struct sock *sk) { + struct tcp_sock *tp = tcp_sk(sk); __u32 packets_out; + /* Do not perform any recovery during FRTO algorithm */ + if (tp->frto_counter) + return 0; + /* Trick#1: The loss is proven. */ if (tp->lost_out) return 1; @@ -1540,7 +1666,7 @@ static int tcp_time_to_recover(struct so /* Trick#3 : when we use RFC2988 timer restart, fast * retransmit can be triggered by timeout of queue head. */ - if (tcp_head_timedout(sk, tp)) + if (tcp_head_timedout(sk)) return 1; /* Trick#4: It is still not OK... But will it be useful to delay @@ -1549,7 +1675,7 @@ static int tcp_time_to_recover(struct so packets_out = tp->packets_out; if (packets_out <= tp->reordering && tp->sacked_out >= max_t(__u32, packets_out/2, sysctl_tcp_reordering) && - !tcp_may_send_now(sk, tp)) { + !tcp_may_send_now(sk)) { /* We have nothing to send. This connection is limited * either by receiver window or by application. */ @@ -1589,8 +1715,10 @@ static void tcp_add_reno_sack(struct soc /* Account for ACK, ACKing some data in Reno Recovery phase. */ -static void tcp_remove_reno_sacks(struct sock *sk, struct tcp_sock *tp, int acked) +static void tcp_remove_reno_sacks(struct sock *sk, int acked) { + struct tcp_sock *tp = tcp_sk(sk); + if (acked > 0) { /* One ACK acked hole. The rest eat duplicate ACKs. */ if (acked-1 >= tp->sacked_out) @@ -1609,9 +1737,10 @@ static inline void tcp_reset_reno_sack(s } /* Mark head of queue up as lost. */ -static void tcp_mark_head_lost(struct sock *sk, struct tcp_sock *tp, +static void tcp_mark_head_lost(struct sock *sk, int packets, u32 high_seq) { + struct tcp_sock *tp = tcp_sk(sk); struct sk_buff *skb; int cnt; @@ -1620,11 +1749,13 @@ static void tcp_mark_head_lost(struct so skb = tp->lost_skb_hint; cnt = tp->lost_cnt_hint; } else { - skb = sk->sk_write_queue.next; + skb = tcp_write_queue_head(sk); cnt = 0; } - sk_stream_for_retrans_queue_from(skb, sk) { + tcp_for_write_queue_from(skb, sk) { + if (skb == tcp_send_head(sk)) + break; /* TODO: do this better */ /* this is not the most efficient way to do this... 
*/ tp->lost_skb_hint = skb; @@ -1638,12 +1769,11 @@ static void tcp_mark_head_lost(struct so /* clear xmit_retransmit_queue hints * if this is beyond hint */ - if(tp->retransmit_skb_hint != NULL && - before(TCP_SKB_CB(skb)->seq, - TCP_SKB_CB(tp->retransmit_skb_hint)->seq)) { - + if (tp->retransmit_skb_hint != NULL && + before(TCP_SKB_CB(skb)->seq, + TCP_SKB_CB(tp->retransmit_skb_hint)->seq)) tp->retransmit_skb_hint = NULL; - } + } } tcp_sync_left_out(tp); @@ -1651,15 +1781,17 @@ static void tcp_mark_head_lost(struct so /* Account newly detected lost packet(s) */ -static void tcp_update_scoreboard(struct sock *sk, struct tcp_sock *tp) +static void tcp_update_scoreboard(struct sock *sk) { + struct tcp_sock *tp = tcp_sk(sk); + if (IsFack(tp)) { int lost = tp->fackets_out - tp->reordering; if (lost <= 0) lost = 1; - tcp_mark_head_lost(sk, tp, lost, tp->high_seq); + tcp_mark_head_lost(sk, lost, tp->high_seq); } else { - tcp_mark_head_lost(sk, tp, 1, tp->high_seq); + tcp_mark_head_lost(sk, 1, tp->high_seq); } /* New heuristics: it is possible only after we switched @@ -1667,13 +1799,15 @@ static void tcp_update_scoreboard(struct * Hence, we can detect timed out packets during fast * retransmit without falling to slow start. */ - if (!IsReno(tp) && tcp_head_timedout(sk, tp)) { + if (!IsReno(tp) && tcp_head_timedout(sk)) { struct sk_buff *skb; skb = tp->scoreboard_skb_hint ? tp->scoreboard_skb_hint - : sk->sk_write_queue.next; + : tcp_write_queue_head(sk); - sk_stream_for_retrans_queue_from(skb, sk) { + tcp_for_write_queue_from(skb, sk) { + if (skb == tcp_send_head(sk)) + break; if (!tcp_skb_timedout(sk, skb)) break; @@ -1745,9 +1879,11 @@ static inline int tcp_packet_delayed(str /* Undo procedures. */ #if FASTRETRANS_DEBUG > 1 -static void DBGUNDO(struct sock *sk, struct tcp_sock *tp, const char *msg) +static void DBGUNDO(struct sock *sk, const char *msg) { + struct tcp_sock *tp = tcp_sk(sk); struct inet_sock *inet = inet_sk(sk); + printk(KERN_DEBUG "Undo %s %u.%u.%u.%u/%u c%u l%u ss%u/%u p%u\n", msg, NIPQUAD(inet->daddr), ntohs(inet->dport), @@ -1793,13 +1929,15 @@ static inline int tcp_may_undo(struct tc } /* People celebrate: "We love our President!" */ -static int tcp_try_undo_recovery(struct sock *sk, struct tcp_sock *tp) +static int tcp_try_undo_recovery(struct sock *sk) { + struct tcp_sock *tp = tcp_sk(sk); + if (tcp_may_undo(tp)) { /* Happy end! We did not retransmit anything * or our original transmission succeeded. */ - DBGUNDO(sk, tp, inet_csk(sk)->icsk_ca_state == TCP_CA_Loss ? "loss" : "retrans"); + DBGUNDO(sk, inet_csk(sk)->icsk_ca_state == TCP_CA_Loss ? "loss" : "retrans"); tcp_undo_cwr(sk, 1); if (inet_csk(sk)->icsk_ca_state == TCP_CA_Loss) NET_INC_STATS_BH(LINUX_MIB_TCPLOSSUNDO); @@ -1819,10 +1957,12 @@ static int tcp_try_undo_recovery(struct } /* Try to undo cwnd reduction, because D-SACKs acked all retransmitted data */ -static void tcp_try_undo_dsack(struct sock *sk, struct tcp_sock *tp) +static void tcp_try_undo_dsack(struct sock *sk) { + struct tcp_sock *tp = tcp_sk(sk); + if (tp->undo_marker && !tp->undo_retrans) { - DBGUNDO(sk, tp, "D-SACK"); + DBGUNDO(sk, "D-SACK"); tcp_undo_cwr(sk, 1); tp->undo_marker = 0; NET_INC_STATS_BH(LINUX_MIB_TCPDSACKUNDO); @@ -1831,9 +1971,9 @@ static void tcp_try_undo_dsack(struct so /* Undo during fast recovery after partial ACK. */ -static int tcp_try_undo_partial(struct sock *sk, struct tcp_sock *tp, - int acked) +static int tcp_try_undo_partial(struct sock *sk, int acked) { + struct tcp_sock *tp = tcp_sk(sk); /* Partial ACK arrived. 
Force Hoe's retransmit. */ int failed = IsReno(tp) || tp->fackets_out>tp->reordering; @@ -1846,7 +1986,7 @@ static int tcp_try_undo_partial(struct s tcp_update_reordering(sk, tcp_fackets_out(tp) + acked, 1); - DBGUNDO(sk, tp, "Hoe"); + DBGUNDO(sk, "Hoe"); tcp_undo_cwr(sk, 0); NET_INC_STATS_BH(LINUX_MIB_TCPPARTIALUNDO); @@ -1860,17 +2000,21 @@ static int tcp_try_undo_partial(struct s } /* Undo during loss recovery after partial ACK. */ -static int tcp_try_undo_loss(struct sock *sk, struct tcp_sock *tp) +static int tcp_try_undo_loss(struct sock *sk) { + struct tcp_sock *tp = tcp_sk(sk); + if (tcp_may_undo(tp)) { struct sk_buff *skb; - sk_stream_for_retrans_queue(skb, sk) { + tcp_for_write_queue(skb, sk) { + if (skb == tcp_send_head(sk)) + break; TCP_SKB_CB(skb)->sacked &= ~TCPCB_LOST; } clear_all_retrans_hints(tp); - DBGUNDO(sk, tp, "partial loss"); + DBGUNDO(sk, "partial loss"); tp->lost_out = 0; tp->left_out = tp->sacked_out; tcp_undo_cwr(sk, 1); @@ -1892,15 +2036,17 @@ static inline void tcp_complete_cwr(stru tcp_ca_event(sk, CA_EVENT_COMPLETE_CWR); } -static void tcp_try_to_open(struct sock *sk, struct tcp_sock *tp, int flag) +static void tcp_try_to_open(struct sock *sk, int flag) { + struct tcp_sock *tp = tcp_sk(sk); + tp->left_out = tp->sacked_out; if (tp->retrans_out == 0) tp->retrans_stamp = 0; if (flag&FLAG_ECE) - tcp_enter_cwr(sk); + tcp_enter_cwr(sk, 1); if (inet_csk(sk)->icsk_ca_state != TCP_CA_CWR) { int state = TCP_CA_Open; @@ -1987,7 +2133,7 @@ tcp_fastretrans_alert(struct sock *sk, u before(tp->snd_una, tp->high_seq) && icsk->icsk_ca_state != TCP_CA_Open && tp->fackets_out > tp->reordering) { - tcp_mark_head_lost(sk, tp, tp->fackets_out-tp->reordering, tp->high_seq); + tcp_mark_head_lost(sk, tp->fackets_out-tp->reordering, tp->high_seq); NET_INC_STATS_BH(LINUX_MIB_TCPLOSS); } @@ -1997,14 +2143,13 @@ tcp_fastretrans_alert(struct sock *sk, u /* E. Check state exit conditions. State can be terminated * when high_seq is ACKed. */ if (icsk->icsk_ca_state == TCP_CA_Open) { - if (!sysctl_tcp_frto) - BUG_TRAP(tp->retrans_out == 0); + BUG_TRAP(tp->retrans_out == 0); tp->retrans_stamp = 0; } else if (!before(tp->snd_una, tp->high_seq)) { switch (icsk->icsk_ca_state) { case TCP_CA_Loss: icsk->icsk_retransmits = 0; - if (tcp_try_undo_recovery(sk, tp)) + if (tcp_try_undo_recovery(sk)) return; break; @@ -2018,7 +2163,7 @@ tcp_fastretrans_alert(struct sock *sk, u break; case TCP_CA_Disorder: - tcp_try_undo_dsack(sk, tp); + tcp_try_undo_dsack(sk); if (!tp->undo_marker || /* For SACK case do not Open to allow to undo * catching for all duplicate ACKs. 
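For readers following the fastretrans_alert() changes above and below, the icsk_ca_state values it cycles through are, in brief (a paraphrase of the existing tcp_input.c state-machine documentation, not new semantics):

/* TCP_CA_Open     - nothing suspicious outstanding, fast path allowed
 * TCP_CA_Disorder - dupACKs/SACKs seen, nothing marked lost yet
 * TCP_CA_CWR      - cwnd reduced after a congestion notification (e.g. ECN),
 *                   no loss assumed
 * TCP_CA_Recovery - fast retransmit/fast recovery in progress
 * TCP_CA_Loss     - RTO (or SACK reneging) recovery, go-back-N
 */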
*/ @@ -2031,7 +2176,7 @@ tcp_fastretrans_alert(struct sock *sk, u case TCP_CA_Recovery: if (IsReno(tp)) tcp_reset_reno_sack(tp); - if (tcp_try_undo_recovery(sk, tp)) + if (tcp_try_undo_recovery(sk)) return; tcp_complete_cwr(sk); break; @@ -2047,14 +2192,14 @@ tcp_fastretrans_alert(struct sock *sk, u } else { int acked = prior_packets - tp->packets_out; if (IsReno(tp)) - tcp_remove_reno_sacks(sk, tp, acked); - is_dupack = tcp_try_undo_partial(sk, tp, acked); + tcp_remove_reno_sacks(sk, acked); + is_dupack = tcp_try_undo_partial(sk, acked); } break; case TCP_CA_Loss: if (flag&FLAG_DATA_ACKED) icsk->icsk_retransmits = 0; - if (!tcp_try_undo_loss(sk, tp)) { + if (!tcp_try_undo_loss(sk)) { tcp_moderate_cwnd(tp); tcp_xmit_retransmit_queue(sk); return; @@ -2071,10 +2216,10 @@ tcp_fastretrans_alert(struct sock *sk, u } if (icsk->icsk_ca_state == TCP_CA_Disorder) - tcp_try_undo_dsack(sk, tp); + tcp_try_undo_dsack(sk); - if (!tcp_time_to_recover(sk, tp)) { - tcp_try_to_open(sk, tp, flag); + if (!tcp_time_to_recover(sk)) { + tcp_try_to_open(sk, flag); return; } @@ -2113,8 +2258,8 @@ tcp_fastretrans_alert(struct sock *sk, u tcp_set_ca_state(sk, TCP_CA_Recovery); } - if (is_dupack || tcp_head_timedout(sk, tp)) - tcp_update_scoreboard(sk, tp); + if (is_dupack || tcp_head_timedout(sk)) + tcp_update_scoreboard(sk); tcp_cwnd_down(sk); tcp_xmit_retransmit_queue(sk); } @@ -2190,8 +2335,10 @@ static void tcp_cong_avoid(struct sock * * RFC2988 recommends to restart timer to now+rto. */ -static void tcp_ack_packets_out(struct sock *sk, struct tcp_sock *tp) +static void tcp_ack_packets_out(struct sock *sk) { + struct tcp_sock *tp = tcp_sk(sk); + if (!tp->packets_out) { inet_csk_clear_xmit_timer(sk, ICSK_TIME_RETRANS); } else { @@ -2277,8 +2424,8 @@ static int tcp_clean_rtx_queue(struct so = icsk->icsk_ca_ops->rtt_sample; struct timeval tv = { .tv_sec = 0, .tv_usec = 0 }; - while ((skb = skb_peek(&sk->sk_write_queue)) && - skb != sk->sk_send_head) { + while ((skb = tcp_write_queue_head(sk)) && + skb != tcp_send_head(sk)) { struct tcp_skb_cb *scb = TCP_SKB_CB(skb); __u8 sacked = scb->sacked; @@ -2318,7 +2465,7 @@ static int tcp_clean_rtx_queue(struct so if (sacked) { if (sacked & TCPCB_RETRANS) { - if(sacked & TCPCB_SACKED_RETRANS) + if (sacked & TCPCB_SACKED_RETRANS) tp->retrans_out -= tcp_skb_pcount(skb); acked |= FLAG_RETRANS_DATA_ACKED; seq_rtt = -1; @@ -2341,14 +2488,14 @@ static int tcp_clean_rtx_queue(struct so } tcp_dec_pcount_approx(&tp->fackets_out, skb); tcp_packets_out_dec(tp, skb); - __skb_unlink(skb, &sk->sk_write_queue); + tcp_unlink_write_queue(skb, sk); sk_stream_free_skb(sk, skb); clear_all_retrans_hints(tp); } if (acked&FLAG_ACKED) { tcp_ack_update_rtt(sk, acked, seq_rtt); - tcp_ack_packets_out(sk, tp); + tcp_ack_packets_out(sk); if (rtt_sample && !(acked & FLAG_RETRANS_DATA_ACKED)) (*rtt_sample)(sk, tcp_usrtt(&tv)); @@ -2390,7 +2537,7 @@ static void tcp_ack_probe(struct sock *s /* Was it a usable window open? */ - if (!after(TCP_SKB_CB(sk->sk_send_head)->end_seq, + if (!after(TCP_SKB_CB(tcp_send_head(sk))->end_seq, tp->snd_una + tp->snd_wnd)) { icsk->icsk_backoff = 0; inet_csk_clear_xmit_timer(sk, ICSK_TIME_PROBE0); @@ -2433,13 +2580,14 @@ static inline int tcp_may_update_window( * Window update algorithm, described in RFC793/RFC1122 (used in linux-2.2 * and in FreeBSD. NetBSD's one is even worse.) is wrong. 
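In tcp_ack_update_window() just below, the 16-bit window field is widened by the peer's window scale unless the segment is a SYN (scaling never applies to SYNs). Numerically that looks like:

#include <stdio.h>
#include <stdint.h>

int main(void)
{
	uint16_t raw_win = 0xffff;	/* th->window as carried on the wire */
	unsigned int wscale;

	/* 14 is the largest scale tcp_parse_options() will accept */
	for (wscale = 0; wscale <= 14; wscale += 7)
		printf("snd_wscale %2u -> effective window %u bytes\n",
		       wscale, (uint32_t)raw_win << wscale);
	return 0;
}

So with the maximum scale of 14 a peer can advertise just under 1 GiB of window from the same 16-bit field.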
*/ -static int tcp_ack_update_window(struct sock *sk, struct tcp_sock *tp, - struct sk_buff *skb, u32 ack, u32 ack_seq) +static int tcp_ack_update_window(struct sock *sk, struct sk_buff *skb, u32 ack, + u32 ack_seq) { + struct tcp_sock *tp = tcp_sk(sk); int flag = 0; - u32 nwin = ntohs(skb->h.th->window); + u32 nwin = ntohs(tcp_hdr(skb)->window); - if (likely(!skb->h.th->syn)) + if (likely(!tcp_hdr(skb)->syn)) nwin <<= tp->rx_opt.snd_wscale; if (tcp_may_update_window(tp, ack, ack_seq, nwin)) { @@ -2453,7 +2601,7 @@ static int tcp_ack_update_window(struct * fast path is recovered for sending TCP. */ tp->pred_flags = 0; - tcp_fast_path_check(sk, tp); + tcp_fast_path_check(sk); if (nwin > tp->max_window) { tp->max_window = nwin; @@ -2467,39 +2615,128 @@ static int tcp_ack_update_window(struct return flag; } -static void tcp_process_frto(struct sock *sk, u32 prior_snd_una) +/* A very conservative spurious RTO response algorithm: reduce cwnd and + * continue in congestion avoidance. + */ +static void tcp_conservative_spur_to_response(struct tcp_sock *tp) +{ + tp->snd_cwnd = min(tp->snd_cwnd, tp->snd_ssthresh); + tp->snd_cwnd_cnt = 0; + tcp_moderate_cwnd(tp); +} + +/* A conservative spurious RTO response algorithm: reduce cwnd using + * rate halving and continue in congestion avoidance. + */ +static void tcp_ratehalving_spur_to_response(struct sock *sk) +{ + tcp_enter_cwr(sk, 0); +} + +static void tcp_undo_spur_to_response(struct sock *sk, int flag) +{ + if (flag&FLAG_ECE) + tcp_ratehalving_spur_to_response(sk); + else + tcp_undo_cwr(sk, 1); +} + +/* F-RTO spurious RTO detection algorithm (RFC4138) + * + * F-RTO affects during two new ACKs following RTO (well, almost, see inline + * comments). State (ACK number) is kept in frto_counter. When ACK advances + * window (but not to or beyond highest sequence sent before RTO): + * On First ACK, send two new segments out. + * On Second ACK, RTO was likely spurious. Do spurious response (response + * algorithm is not part of the F-RTO detection algorithm + * given in RFC4138 but can be selected separately). + * Otherwise (basically on duplicate ACK), RTO was (likely) caused by a loss + * and TCP falls back to conventional RTO recovery. + * + * Rationale: if the RTO was spurious, new ACKs should arrive from the + * original window even after we transmit two new data segments. + * + * SACK version: + * on first step, wait until first cumulative ACK arrives, then move to + * the second step. In second step, the next ACK decides. + * + * F-RTO is implemented (mainly) in four functions: + * - tcp_use_frto() is used to determine if TCP is can use F-RTO + * - tcp_enter_frto() prepares TCP state on RTO if F-RTO is used, it is + * called when tcp_use_frto() showed green light + * - tcp_process_frto() handles incoming ACKs during F-RTO algorithm + * - tcp_enter_frto_loss() is called if there is not enough evidence + * to prove that the RTO is indeed spurious. 
It transfers the control + * from F-RTO to the conventional RTO recovery + */ +static int tcp_process_frto(struct sock *sk, u32 prior_snd_una, int flag) { struct tcp_sock *tp = tcp_sk(sk); tcp_sync_left_out(tp); - if (tp->snd_una == prior_snd_una || - !before(tp->snd_una, tp->frto_highmark)) { - /* RTO was caused by loss, start retransmitting in - * go-back-N slow start - */ - tcp_enter_frto_loss(sk); - return; + /* Duplicate the behavior from Loss state (fastretrans_alert) */ + if (flag&FLAG_DATA_ACKED) + inet_csk(sk)->icsk_retransmits = 0; + + if (!before(tp->snd_una, tp->frto_highmark)) { + tcp_enter_frto_loss(sk, tp->frto_counter + 1, flag); + return 1; } - if (tp->frto_counter == 1) { - /* First ACK after RTO advances the window: allow two new - * segments out. + if (!IsSackFrto() || IsReno(tp)) { + /* RFC4138 shortcoming in step 2; should also have case c): + * ACK isn't duplicate nor advances window, e.g., opposite dir + * data, winupdate */ - tp->snd_cwnd = tcp_packets_in_flight(tp) + 2; + if ((tp->snd_una == prior_snd_una) && (flag&FLAG_NOT_DUP) && + !(flag&FLAG_FORWARD_PROGRESS)) + return 1; + + if (!(flag&FLAG_DATA_ACKED)) { + tcp_enter_frto_loss(sk, (tp->frto_counter == 1 ? 0 : 3), + flag); + return 1; + } } else { - /* Also the second ACK after RTO advances the window. - * The RTO was likely spurious. Reduce cwnd and continue - * in congestion avoidance - */ - tp->snd_cwnd = min(tp->snd_cwnd, tp->snd_ssthresh); - tcp_moderate_cwnd(tp); + if (!(flag&FLAG_DATA_ACKED) && (tp->frto_counter == 1)) { + /* Prevent sending of new data. */ + tp->snd_cwnd = min(tp->snd_cwnd, + tcp_packets_in_flight(tp)); + return 1; + } + + if ((tp->frto_counter == 2) && + (!(flag&FLAG_FORWARD_PROGRESS) || + ((flag&FLAG_DATA_SACKED) && !(flag&FLAG_ONLY_ORIG_SACKED)))) { + /* RFC4138 shortcoming (see comment above) */ + if (!(flag&FLAG_FORWARD_PROGRESS) && (flag&FLAG_NOT_DUP)) + return 1; + + tcp_enter_frto_loss(sk, 3, flag); + return 1; + } } - /* F-RTO affects on two new ACKs following RTO. - * At latest on third ACK the TCP behavior is back to normal. - */ - tp->frto_counter = (tp->frto_counter + 1) % 3; + if (tp->frto_counter == 1) { + tp->snd_cwnd = tcp_packets_in_flight(tp) + 2; + tp->frto_counter = 2; + return 1; + } else /* frto_counter == 2 */ { + switch (sysctl_tcp_frto_response) { + case 2: + tcp_undo_spur_to_response(sk, flag); + break; + case 1: + tcp_conservative_spur_to_response(tp); + break; + default: + tcp_ratehalving_spur_to_response(sk); + break; + } + tp->frto_counter = 0; + } + return 0; } /* This routine deals with incoming acks, but not outgoing ones. */ @@ -2513,6 +2750,7 @@ static int tcp_ack(struct sock *sk, stru u32 prior_in_flight; s32 seq_rtt; int prior_packets; + int frto_cwnd = 0; /* If the ack is newer than sent or older than previous acks * then we can probably ignore it. 
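The comment block above compresses RFC 4138 quite a bit, so here is a deliberately tiny user-space model of the basic (non-SACK) detection phase only; it mirrors the frto_counter progression and nothing else, ignoring the window checks, the SACK variant and the corner cases the comments call RFC 4138 shortcomings:

#include <stdio.h>

enum frto_verdict { FRTO_PROBING, FRTO_SPURIOUS, FRTO_LOSS };

/* counter plays the role of tp->frto_counter (1 after the RTO, 2 after
 * the first new ACK); ack_advances says whether the ACK moved snd_una. */
static enum frto_verdict frto_ack(int *counter, int ack_advances)
{
	if (!ack_advances)
		return FRTO_LOSS;	/* duplicate ACK: data really was lost */

	if (*counter == 1) {
		*counter = 2;		/* first new ACK: transmit two new segments */
		return FRTO_PROBING;
	}
	return FRTO_SPURIOUS;		/* second new ACK came from the original window */
}

int main(void)
{
	int counter = 1;		/* as set by tcp_enter_frto() */

	printf("first ACK  -> %d\n", frto_ack(&counter, 1));
	printf("second ACK -> %d\n", frto_ack(&counter, 1));
	return 0;
}

When the verdict is spurious, the switch at the end of tcp_process_frto() picks the response according to the new tcp_frto_response sysctl: 0 gives the rate-halving response, 1 the conservative cwnd/ssthresh clamp, and 2 the full congestion-state undo.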
@@ -2549,12 +2787,12 @@ static int tcp_ack(struct sock *sk, stru else NET_INC_STATS_BH(LINUX_MIB_TCPPUREACKS); - flag |= tcp_ack_update_window(sk, tp, skb, ack, ack_seq); + flag |= tcp_ack_update_window(sk, skb, ack, ack_seq); if (TCP_SKB_CB(skb)->sacked) flag |= tcp_sacktag_write_queue(sk, skb, prior_snd_una); - if (TCP_ECN_rcv_ecn_echo(tp, skb->h.th)) + if (TCP_ECN_rcv_ecn_echo(tp, tcp_hdr(skb))) flag |= FLAG_ECE; tcp_ca_event(sk, CA_EVENT_SLOW_ACK); @@ -2575,15 +2813,16 @@ static int tcp_ack(struct sock *sk, stru flag |= tcp_clean_rtx_queue(sk, &seq_rtt); if (tp->frto_counter) - tcp_process_frto(sk, prior_snd_una); + frto_cwnd = tcp_process_frto(sk, prior_snd_una, flag); if (tcp_ack_is_dubious(sk, flag)) { /* Advance CWND, if state allows this. */ - if ((flag & FLAG_DATA_ACKED) && tcp_may_raise_cwnd(sk, flag)) + if ((flag & FLAG_DATA_ACKED) && !frto_cwnd && + tcp_may_raise_cwnd(sk, flag)) tcp_cong_avoid(sk, ack, seq_rtt, prior_in_flight, 0); tcp_fastretrans_alert(sk, prior_snd_una, prior_packets, flag); } else { - if ((flag & FLAG_DATA_ACKED)) + if ((flag & FLAG_DATA_ACKED) && !frto_cwnd) tcp_cong_avoid(sk, ack, seq_rtt, prior_in_flight, 1); } @@ -2599,7 +2838,7 @@ no_queue: * being used to time the probes, and is probably far higher than * it needs to be for normal retransmission. */ - if (sk->sk_send_head) + if (tcp_send_head(sk)) tcp_ack_probe(sk); return 1; @@ -2620,13 +2859,13 @@ uninteresting_ack: void tcp_parse_options(struct sk_buff *skb, struct tcp_options_received *opt_rx, int estab) { unsigned char *ptr; - struct tcphdr *th = skb->h.th; + struct tcphdr *th = tcp_hdr(skb); int length=(th->doff*4)-sizeof(struct tcphdr); ptr = (unsigned char *)(th + 1); opt_rx->saw_tstamp = 0; - while(length>0) { + while (length > 0) { int opcode=*ptr++; int opsize; @@ -2642,9 +2881,9 @@ void tcp_parse_options(struct sk_buff *s return; if (opsize > length) return; /* don't parse partial options */ - switch(opcode) { + switch (opcode) { case TCPOPT_MSS: - if(opsize==TCPOLEN_MSS && th->syn && !estab) { + if (opsize==TCPOLEN_MSS && th->syn && !estab) { u16 in_mss = ntohs(get_unaligned((__be16 *)ptr)); if (in_mss) { if (opt_rx->user_mss && opt_rx->user_mss < in_mss) @@ -2654,12 +2893,12 @@ void tcp_parse_options(struct sk_buff *s } break; case TCPOPT_WINDOW: - if(opsize==TCPOLEN_WINDOW && th->syn && !estab) + if (opsize==TCPOLEN_WINDOW && th->syn && !estab) if (sysctl_tcp_window_scaling) { __u8 snd_wscale = *(__u8 *) ptr; opt_rx->wscale_ok = 1; if (snd_wscale > 14) { - if(net_ratelimit()) + if (net_ratelimit()) printk(KERN_INFO "tcp_parse_options: Illegal window " "scaling value %d >14 received.\n", snd_wscale); @@ -2669,7 +2908,7 @@ void tcp_parse_options(struct sk_buff *s } break; case TCPOPT_TIMESTAMP: - if(opsize==TCPOLEN_TIMESTAMP) { + if (opsize==TCPOLEN_TIMESTAMP) { if ((estab && opt_rx->tstamp_ok) || (!estab && sysctl_tcp_timestamps)) { opt_rx->saw_tstamp = 1; @@ -2679,7 +2918,7 @@ void tcp_parse_options(struct sk_buff *s } break; case TCPOPT_SACK_PERM: - if(opsize==TCPOLEN_SACK_PERM && th->syn && !estab) { + if (opsize==TCPOLEN_SACK_PERM && th->syn && !estab) { if (sysctl_tcp_sack) { opt_rx->sack_ok = 1; tcp_sack_reset(opt_rx); @@ -2688,7 +2927,7 @@ void tcp_parse_options(struct sk_buff *s break; case TCPOPT_SACK: - if((opsize >= (TCPOLEN_SACK_BASE + TCPOLEN_SACK_PERBLOCK)) && + if ((opsize >= (TCPOLEN_SACK_BASE + TCPOLEN_SACK_PERBLOCK)) && !((opsize - TCPOLEN_SACK_BASE) % TCPOLEN_SACK_PERBLOCK) && opt_rx->sack_ok) { TCP_SKB_CB(skb)->sacked = (ptr - 2) - (unsigned char *)th; @@ -2701,10 
+2940,11 @@ void tcp_parse_options(struct sk_buff *s */ break; #endif - }; + } + ptr+=opsize-2; length-=opsize; - }; + } } } @@ -2737,7 +2977,7 @@ static int tcp_fast_parse_options(struct static inline void tcp_store_ts_recent(struct tcp_sock *tp) { tp->rx_opt.ts_recent = tp->rx_opt.rcv_tsval; - tp->rx_opt.ts_recent_stamp = xtime.tv_sec; + tp->rx_opt.ts_recent_stamp = get_seconds(); } static inline void tcp_replace_ts_recent(struct tcp_sock *tp, u32 seq) @@ -2750,8 +2990,8 @@ static inline void tcp_replace_ts_recent * Not only, also it occurs for expired timestamps. */ - if((s32)(tp->rx_opt.rcv_tsval - tp->rx_opt.ts_recent) >= 0 || - xtime.tv_sec >= tp->rx_opt.ts_recent_stamp + TCP_PAWS_24DAYS) + if ((s32)(tp->rx_opt.rcv_tsval - tp->rx_opt.ts_recent) >= 0 || + get_seconds() >= tp->rx_opt.ts_recent_stamp + TCP_PAWS_24DAYS) tcp_store_ts_recent(tp); } } @@ -2782,7 +3022,7 @@ static inline void tcp_replace_ts_recent static int tcp_disordered_ack(const struct sock *sk, const struct sk_buff *skb) { struct tcp_sock *tp = tcp_sk(sk); - struct tcphdr *th = skb->h.th; + struct tcphdr *th = tcp_hdr(skb); u32 seq = TCP_SKB_CB(skb)->seq; u32 ack = TCP_SKB_CB(skb)->ack_seq; @@ -2803,7 +3043,7 @@ static inline int tcp_paws_discard(const { const struct tcp_sock *tp = tcp_sk(sk); return ((s32)(tp->rx_opt.ts_recent - tp->rx_opt.rcv_tsval) > TCP_PAWS_WINDOW && - xtime.tv_sec < tp->rx_opt.ts_recent_stamp + TCP_PAWS_24DAYS && + get_seconds() < tp->rx_opt.ts_recent_stamp + TCP_PAWS_24DAYS && !tcp_disordered_ack(sk, skb)); } @@ -2910,7 +3150,7 @@ static void tcp_fin(struct sk_buff *skb, printk(KERN_ERR "%s: Impossible, sk->sk_state=%d\n", __FUNCTION__, sk->sk_state); break; - }; + } /* It _is_ possible, that we have something out-of-order _after_ FIN. * Probably, we should reset in this case. For now drop them. @@ -3009,7 +3249,7 @@ static void tcp_sack_maybe_coalesce(stru */ tp->rx_opt.num_sacks--; tp->rx_opt.eff_sacks = min(tp->rx_opt.num_sacks + tp->rx_opt.dsack, 4 - tp->rx_opt.tstamp_ok); - for(i=this_sack; i < tp->rx_opt.num_sacks; i++) + for (i=this_sack; i < tp->rx_opt.num_sacks; i++) sp[i] = sp[i+1]; continue; } @@ -3062,7 +3302,7 @@ static void tcp_sack_new_ofo_skb(struct tp->rx_opt.num_sacks--; sp--; } - for(; this_sack > 0; this_sack--, sp--) + for (; this_sack > 0; this_sack--, sp--) *sp = *(sp-1); new_sack: @@ -3088,7 +3328,7 @@ static void tcp_sack_remove(struct tcp_s return; } - for(this_sack = 0; this_sack < num_sacks; ) { + for (this_sack = 0; this_sack < num_sacks; ) { /* Check if the start of the sack is covered by RCV.NXT. 
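Both the ts_recent bookkeeping (now based on get_seconds()) and the PAWS test above compare 32-bit timestamps with a signed subtraction, so the comparison stays correct when the value wraps. A minimal illustration of that idiom:

#include <stdio.h>
#include <stdint.h>

/* (s32)(a - b) > 0 reads as "a is newer than b" even across wraparound,
 * which is how rcv_tsval is compared against ts_recent above. */
static int ts_newer(uint32_t a, uint32_t b)
{
	return (int32_t)(a - b) > 0;
}

int main(void)
{
	uint32_t before_wrap = 0xfffffff0u;
	uint32_t after_wrap  = 0x00000010u;

	printf("after_wrap newer?  %d\n", ts_newer(after_wrap, before_wrap));	/* 1 */
	printf("before_wrap newer? %d\n", ts_newer(before_wrap, after_wrap));	/* 0 */
	return 0;
}

get_seconds() simply replaces the open-coded xtime.tv_sec read for the wall-clock side of the 24-day PAWS staleness window.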
*/ if (!before(tp->rcv_nxt, sp->start_seq)) { int i; @@ -3144,8 +3384,8 @@ static void tcp_ofo_queue(struct sock *s __skb_unlink(skb, &tp->out_of_order_queue); __skb_queue_tail(&sk->sk_receive_queue, skb); tp->rcv_nxt = TCP_SKB_CB(skb)->end_seq; - if(skb->h.th->fin) - tcp_fin(skb, sk, skb->h.th); + if (tcp_hdr(skb)->fin) + tcp_fin(skb, sk, tcp_hdr(skb)); } } @@ -3153,7 +3393,7 @@ static int tcp_prune_queue(struct sock * static void tcp_data_queue(struct sock *sk, struct sk_buff *skb) { - struct tcphdr *th = skb->h.th; + struct tcphdr *th = tcp_hdr(skb); struct tcp_sock *tp = tcp_sk(sk); int eaten = -1; @@ -3210,9 +3450,9 @@ queue_and_out: __skb_queue_tail(&sk->sk_receive_queue, skb); } tp->rcv_nxt = TCP_SKB_CB(skb)->end_seq; - if(skb->len) - tcp_event_data_recv(sk, tp, skb); - if(th->fin) + if (skb->len) + tcp_event_data_recv(sk, skb); + if (th->fin) tcp_fin(skb, sk, th); if (!skb_queue_empty(&tp->out_of_order_queue)) { @@ -3228,7 +3468,7 @@ queue_and_out: if (tp->rx_opt.num_sacks) tcp_sack_remove(tp); - tcp_fast_path_check(sk, tp); + tcp_fast_path_check(sk); if (eaten > 0) __kfree_skb(skb); @@ -3392,7 +3632,7 @@ tcp_collapse(struct sock *sk, struct sk_ * - bloated or contains data before "start" or * overlaps to the next one. */ - if (!skb->h.th->syn && !skb->h.th->fin && + if (!tcp_hdr(skb)->syn && !tcp_hdr(skb)->fin && (tcp_win_from_space(skb->truesize) > skb->len || before(TCP_SKB_CB(skb)->seq, start) || (skb->next != tail && @@ -3403,7 +3643,7 @@ tcp_collapse(struct sock *sk, struct sk_ start = TCP_SKB_CB(skb)->end_seq; skb = skb->next; } - if (skb == tail || skb->h.th->syn || skb->h.th->fin) + if (skb == tail || tcp_hdr(skb)->syn || tcp_hdr(skb)->fin) return; while (before(start, end)) { @@ -3419,11 +3659,14 @@ tcp_collapse(struct sock *sk, struct sk_ nskb = alloc_skb(copy+header, GFP_ATOMIC); if (!nskb) return; + + skb_set_mac_header(nskb, skb_mac_header(skb) - skb->head); + skb_set_network_header(nskb, (skb_network_header(skb) - + skb->head)); + skb_set_transport_header(nskb, (skb_transport_header(skb) - + skb->head)); skb_reserve(nskb, header); memcpy(nskb->head, skb->head, header); - nskb->nh.raw = nskb->head + (skb->nh.raw-skb->head); - nskb->h.raw = nskb->head + (skb->h.raw-skb->head); - nskb->mac.raw = nskb->head + (skb->mac.raw-skb->head); memcpy(nskb->cb, skb->cb, sizeof(skb->cb)); TCP_SKB_CB(nskb)->seq = TCP_SKB_CB(nskb)->end_seq = start; __skb_insert(nskb, skb->prev, skb, list); @@ -3449,7 +3692,9 @@ tcp_collapse(struct sock *sk, struct sk_ __kfree_skb(skb); NET_INC_STATS_BH(LINUX_MIB_TCPRCVCOLLAPSED); skb = next; - if (skb == tail || skb->h.th->syn || skb->h.th->fin) + if (skb == tail || + tcp_hdr(skb)->syn || + tcp_hdr(skb)->fin) return; } } @@ -3514,7 +3759,7 @@ static int tcp_prune_queue(struct sock * NET_INC_STATS_BH(LINUX_MIB_PRUNECALLED); if (atomic_read(&sk->sk_rmem_alloc) >= sk->sk_rcvbuf) - tcp_clamp_window(sk, tp); + tcp_clamp_window(sk); else if (tcp_memory_pressure) tp->rcv_ssthresh = min(tp->rcv_ssthresh, 4U * tp->advmss); @@ -3583,8 +3828,10 @@ void tcp_cwnd_application_limited(struct tp->snd_cwnd_stamp = tcp_time_stamp; } -static int tcp_should_expand_sndbuf(struct sock *sk, struct tcp_sock *tp) +static int tcp_should_expand_sndbuf(struct sock *sk) { + struct tcp_sock *tp = tcp_sk(sk); + /* If the user specified a specific send buffer setting, do * not modify it. 
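
tcp_should_expand_sndbuf() above drops its explicit struct tcp_sock argument and recovers it from the socket, the same transformation this patch applies to tcp_clamp_window(), tcp_data_snd_check() and friends. The pattern, reduced to a sketch (some_tcp_helper is a made-up name and its body a placeholder):

#include <net/tcp.h>

/* Sketch of the sk-only calling convention: tp is derived, not passed. */
static int some_tcp_helper(struct sock *sk)
{
	struct tcp_sock *tp = tcp_sk(sk);	/* tcp_sock embeds the sock */

	/* ... body uses tp exactly as before ... */
	return tp->packets_out != 0;		/* placeholder */
}
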
*/ @@ -3616,7 +3863,7 @@ static void tcp_new_space(struct sock *s { struct tcp_sock *tp = tcp_sk(sk); - if (tcp_should_expand_sndbuf(sk, tp)) { + if (tcp_should_expand_sndbuf(sk)) { int sndmem = max_t(u32, tp->rx_opt.mss_clamp, tp->mss_cache) + MAX_TCP_HEADER + 16 + sizeof(struct sk_buff), demanded = max_t(unsigned int, tp->snd_cwnd, @@ -3640,9 +3887,9 @@ static void tcp_check_space(struct sock } } -static inline void tcp_data_snd_check(struct sock *sk, struct tcp_sock *tp) +static inline void tcp_data_snd_check(struct sock *sk) { - tcp_push_pending_frames(sk, tp); + tcp_push_pending_frames(sk); tcp_check_space(sk); } @@ -3790,7 +4037,7 @@ static int tcp_copy_to_iovec(struct sock int err; local_bh_enable(); - if (skb->ip_summed==CHECKSUM_UNNECESSARY) + if (skb_csum_unnecessary(skb)) err = skb_copy_datagram_iovec(skb, hlen, tp->ucopy.iov, chunk); else err = skb_copy_and_csum_datagram_iovec(skb, hlen, @@ -3822,7 +4069,7 @@ static __sum16 __tcp_checksum_complete_u static inline int tcp_checksum_complete_user(struct sock *sk, struct sk_buff *skb) { - return skb->ip_summed != CHECKSUM_UNNECESSARY && + return !skb_csum_unnecessary(skb) && __tcp_checksum_complete_user(sk, skb); } @@ -3840,7 +4087,7 @@ static int tcp_dma_try_early_copy(struct if (!tp->ucopy.dma_chan && tp->ucopy.pinned_list) tp->ucopy.dma_chan = get_softnet_dma(); - if (tp->ucopy.dma_chan && skb->ip_summed == CHECKSUM_UNNECESSARY) { + if (tp->ucopy.dma_chan && skb_csum_unnecessary(skb)) { dma_cookie = dma_skb_copy_datagram_iovec(tp->ucopy.dma_chan, skb, hlen, tp->ucopy.iov, chunk, tp->ucopy.pinned_list); @@ -3856,7 +4103,7 @@ static int tcp_dma_try_early_copy(struct tcp_rcv_space_adjust(sk); if ((tp->ucopy.len == 0) || - (tcp_flag_word(skb->h.th) & TCP_FLAG_PSH) || + (tcp_flag_word(tcp_hdr(skb)) & TCP_FLAG_PSH) || (atomic_read(&sk->sk_rmem_alloc) > (sk->sk_rcvbuf >> 1))) { tp->ucopy.wakeup = 1; sk->sk_data_ready(sk, 0); @@ -3976,7 +4223,7 @@ int tcp_rcv_established(struct sock *sk, */ tcp_ack(sk, skb, 0); __kfree_skb(skb); - tcp_data_snd_check(sk, tp); + tcp_data_snd_check(sk); return 0; } else { /* Header too small */ TCP_INC_STATS_BH(TCP_MIB_INERRS); @@ -4047,12 +4294,12 @@ int tcp_rcv_established(struct sock *sk, tp->rcv_nxt = TCP_SKB_CB(skb)->end_seq; } - tcp_event_data_recv(sk, tp, skb); + tcp_event_data_recv(sk, skb); if (TCP_SKB_CB(skb)->ack_seq != tp->snd_una) { /* Well, only one small jumplet in fast path... */ tcp_ack(sk, skb, FLAG_DATA); - tcp_data_snd_check(sk, tp); + tcp_data_snd_check(sk); if (!inet_csk_ack_scheduled(sk)) goto no_ack; } @@ -4109,7 +4356,7 @@ slow_path: goto discard; } - if(th->rst) { + if (th->rst) { tcp_reset(sk); goto discard; } @@ -4124,7 +4371,7 @@ slow_path: } step5: - if(th->ack) + if (th->ack) tcp_ack(sk, skb, FLAG_SLOWPATH); tcp_rcv_rtt_measure_ts(sk, skb); @@ -4135,7 +4382,7 @@ step5: /* step 7: process the segment text */ tcp_data_queue(sk, skb); - tcp_data_snd_check(sk, tp); + tcp_data_snd_check(sk); tcp_ack_snd_check(sk); return 0; @@ -4412,13 +4659,13 @@ int tcp_rcv_state_process(struct sock *s goto discard; case TCP_LISTEN: - if(th->ack) + if (th->ack) return 1; - if(th->rst) + if (th->rst) goto discard; - if(th->syn) { + if (th->syn) { if (icsk->icsk_af_ops->conn_request(sk, skb) < 0) return 1; @@ -4452,7 +4699,7 @@ int tcp_rcv_state_process(struct sock *s /* Do step6 onward by hand. 
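
tcp_copy_to_iovec(), tcp_checksum_complete_user() and the DMA early-copy path above switch from open-coded ip_summed comparisons to skb_csum_unnecessary(). As used here it is simply a named form of the old test; a hedged sketch (the suffix marks this as an approximation, not the skbuff.h definition):

#include <linux/skbuff.h>

/* Sketch: true when an earlier hardware or software pass already
 * verified the checksum, so the copy path may skip recomputation.
 */
static inline int skb_csum_unnecessary_sketch(const struct sk_buff *skb)
{
	return skb->ip_summed == CHECKSUM_UNNECESSARY;
}
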
*/ tcp_urg(sk, skb, th); __kfree_skb(skb); - tcp_data_snd_check(sk, tp); + tcp_data_snd_check(sk); return 0; } @@ -4474,7 +4721,7 @@ int tcp_rcv_state_process(struct sock *s } /* step 2: check RST bit */ - if(th->rst) { + if (th->rst) { tcp_reset(sk); goto discard; } @@ -4497,7 +4744,7 @@ int tcp_rcv_state_process(struct sock *s if (th->ack) { int acceptable = tcp_ack(sk, skb, FLAG_SLOWPATH); - switch(sk->sk_state) { + switch (sk->sk_state) { case TCP_SYN_RECV: if (acceptable) { tp->copied_seq = tp->rcv_nxt; @@ -4644,7 +4891,7 @@ int tcp_rcv_state_process(struct sock *s /* tcp_data could move socket to TIME-WAIT */ if (sk->sk_state != TCP_CLOSE) { - tcp_data_snd_check(sk, tp); + tcp_data_snd_check(sk); tcp_ack_snd_check(sk); } diff -puN net/ipv4/tcp_ipv4.c~git-net net/ipv4/tcp_ipv4.c --- a/net/ipv4/tcp_ipv4.c~git-net +++ a/net/ipv4/tcp_ipv4.c @@ -88,7 +88,7 @@ int sysctl_tcp_low_latency __read_mostly #define ICMP_MIN_LENGTH 8 /* Socket used for sending RSTs */ -static struct socket *tcp_socket; +static struct socket *tcp_socket __read_mostly; void tcp_v4_send_check(struct sock *sk, int len, struct sk_buff *skb); @@ -125,10 +125,10 @@ void tcp_unhash(struct sock *sk) static inline __u32 tcp_v4_init_sequence(struct sk_buff *skb) { - return secure_tcp_sequence_number(skb->nh.iph->daddr, - skb->nh.iph->saddr, - skb->h.th->dest, - skb->h.th->source); + return secure_tcp_sequence_number(ip_hdr(skb)->daddr, + ip_hdr(skb)->saddr, + tcp_hdr(skb)->dest, + tcp_hdr(skb)->source); } int tcp_twsk_unique(struct sock *sk, struct sock *sktw, void *twp) @@ -149,7 +149,7 @@ int tcp_twsk_unique(struct sock *sk, str */ if (tcptw->tw_ts_recent_stamp && (twp == NULL || (sysctl_tcp_tw_reuse && - xtime.tv_sec - tcptw->tw_ts_recent_stamp > 1))) { + get_seconds() - tcptw->tw_ts_recent_stamp > 1))) { tp->write_seq = tcptw->tw_snd_nxt + 65535 + 2; if (tp->write_seq == 0) tp->write_seq = 1; @@ -224,7 +224,7 @@ int tcp_v4_connect(struct sock *sk, stru * when trying new connection. 
*/ if (peer != NULL && - peer->tcp_ts_stamp + TCP_PAWS_MSL >= xtime.tv_sec) { + peer->tcp_ts_stamp + TCP_PAWS_MSL >= get_seconds()) { tp->rx_opt.ts_recent_stamp = peer->tcp_ts_stamp; tp->rx_opt.ts_recent = peer->tcp_ts; } @@ -354,8 +354,8 @@ void tcp_v4_err(struct sk_buff *skb, u32 struct tcphdr *th = (struct tcphdr *)(skb->data + (iph->ihl << 2)); struct tcp_sock *tp; struct inet_sock *inet; - int type = skb->h.icmph->type; - int code = skb->h.icmph->code; + const int type = icmp_hdr(skb)->type; + const int code = icmp_hdr(skb)->code; struct sock *sk; __u32 seq; int err; @@ -499,11 +499,12 @@ out: void tcp_v4_send_check(struct sock *sk, int len, struct sk_buff *skb) { struct inet_sock *inet = inet_sk(sk); - struct tcphdr *th = skb->h.th; + struct tcphdr *th = tcp_hdr(skb); if (skb->ip_summed == CHECKSUM_PARTIAL) { th->check = ~tcp_v4_check(len, inet->saddr, inet->daddr, 0); + skb->csum_start = skb_transport_header(skb) - skb->head; skb->csum_offset = offsetof(struct tcphdr, check); } else { th->check = tcp_v4_check(len, inet->saddr, inet->daddr, @@ -515,17 +516,18 @@ void tcp_v4_send_check(struct sock *sk, int tcp_v4_gso_send_check(struct sk_buff *skb) { - struct iphdr *iph; + const struct iphdr *iph; struct tcphdr *th; if (!pskb_may_pull(skb, sizeof(*th))) return -EINVAL; - iph = skb->nh.iph; - th = skb->h.th; + iph = ip_hdr(skb); + th = tcp_hdr(skb); th->check = 0; th->check = ~tcp_v4_check(skb->len, iph->saddr, iph->daddr, 0); + skb->csum_start = skb_transport_header(skb) - skb->head; skb->csum_offset = offsetof(struct tcphdr, check); skb->ip_summed = CHECKSUM_PARTIAL; return 0; @@ -546,7 +548,7 @@ int tcp_v4_gso_send_check(struct sk_buff static void tcp_v4_send_reset(struct sock *sk, struct sk_buff *skb) { - struct tcphdr *th = skb->h.th; + struct tcphdr *th = tcp_hdr(skb); struct { struct tcphdr th; #ifdef CONFIG_TCP_MD5SIG @@ -585,7 +587,7 @@ static void tcp_v4_send_reset(struct soc arg.iov[0].iov_len = sizeof(rep.th); #ifdef CONFIG_TCP_MD5SIG - key = sk ? tcp_v4_md5_do_lookup(sk, skb->nh.iph->daddr) : NULL; + key = sk ? tcp_v4_md5_do_lookup(sk, ip_hdr(skb)->daddr) : NULL; if (key) { rep.opt[0] = htonl((TCPOPT_NOP << 24) | (TCPOPT_NOP << 16) | @@ -597,14 +599,14 @@ static void tcp_v4_send_reset(struct soc tcp_v4_do_calc_md5_hash((__u8 *)&rep.opt[1], key, - skb->nh.iph->daddr, - skb->nh.iph->saddr, + ip_hdr(skb)->daddr, + ip_hdr(skb)->saddr, &rep.th, IPPROTO_TCP, arg.iov[0].iov_len); } #endif - arg.csum = csum_tcpudp_nofold(skb->nh.iph->daddr, - skb->nh.iph->saddr, /* XXX */ + arg.csum = csum_tcpudp_nofold(ip_hdr(skb)->daddr, + ip_hdr(skb)->saddr, /* XXX */ sizeof(struct tcphdr), IPPROTO_TCP, 0); arg.csumoffset = offsetof(struct tcphdr, check) / 2; @@ -622,7 +624,7 @@ static void tcp_v4_send_ack(struct tcp_t struct sk_buff *skb, u32 seq, u32 ack, u32 win, u32 ts) { - struct tcphdr *th = skb->h.th; + struct tcphdr *th = tcp_hdr(skb); struct { struct tcphdr th; __be32 opt[(TCPOLEN_TSTAMP_ALIGNED >> 2) @@ -670,7 +672,7 @@ static void tcp_v4_send_ack(struct tcp_t * skb->sk) holds true, but we program defensively. 
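
Both tcp_v4_send_check() and tcp_v4_gso_send_check() above now record skb->csum_start next to csum_offset when handing a CHECKSUM_PARTIAL packet down: csum_start locates the transport header relative to skb->head, csum_offset the check field inside it. A sketch of how a consumer would resolve the field from those two values (illustrative helper, not taken from any driver):

#include <linux/skbuff.h>

/* Sketch: where the final checksum must be written for a
 * CHECKSUM_PARTIAL skb, per the fields set above.
 */
static __sum16 *partial_csum_field(struct sk_buff *skb)
{
	/* csum_start is measured from skb->head, csum_offset from there. */
	return (__sum16 *)(skb->head + skb->csum_start + skb->csum_offset);
}
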
*/ if (!twsk && skb->sk) { - key = tcp_v4_md5_do_lookup(skb->sk, skb->nh.iph->daddr); + key = tcp_v4_md5_do_lookup(skb->sk, ip_hdr(skb)->daddr); } else if (twsk && twsk->tw_md5_keylen) { tw_key.key = twsk->tw_md5_key; tw_key.keylen = twsk->tw_md5_keylen; @@ -690,14 +692,14 @@ static void tcp_v4_send_ack(struct tcp_t tcp_v4_do_calc_md5_hash((__u8 *)&rep.opt[offset], key, - skb->nh.iph->daddr, - skb->nh.iph->saddr, + ip_hdr(skb)->daddr, + ip_hdr(skb)->saddr, &rep.th, IPPROTO_TCP, arg.iov[0].iov_len); } #endif - arg.csum = csum_tcpudp_nofold(skb->nh.iph->daddr, - skb->nh.iph->saddr, /* XXX */ + arg.csum = csum_tcpudp_nofold(ip_hdr(skb)->daddr, + ip_hdr(skb)->saddr, /* XXX */ arg.iov[0].iov_len, IPPROTO_TCP, 0); arg.csumoffset = offsetof(struct tcphdr, check) / 2; @@ -745,7 +747,7 @@ static int tcp_v4_send_synack(struct soc skb = tcp_make_synack(sk, dst, req); if (skb) { - struct tcphdr *th = skb->h.th; + struct tcphdr *th = tcp_hdr(skb); th->check = tcp_v4_check(skb->len, ireq->loc_addr, @@ -781,7 +783,7 @@ static void syn_flood_warning(struct sk_ warntime = jiffies; printk(KERN_INFO "possible SYN flooding on port %d. Sending cookies.\n", - ntohs(skb->h.th->dest)); + ntohs(tcp_hdr(skb)->dest)); } } #endif @@ -1133,8 +1135,8 @@ static int tcp_v4_inbound_md5_hash(struc */ __u8 *hash_location = NULL; struct tcp_md5sig_key *hash_expected; - struct iphdr *iph = skb->nh.iph; - struct tcphdr *th = skb->h.th; + const struct iphdr *iph = ip_hdr(skb); + struct tcphdr *th = tcp_hdr(skb); int length = (th->doff << 2) - sizeof(struct tcphdr); int genhash; unsigned char *ptr; @@ -1251,8 +1253,8 @@ int tcp_v4_conn_request(struct sock *sk, struct inet_request_sock *ireq; struct tcp_options_received tmp_opt; struct request_sock *req; - __be32 saddr = skb->nh.iph->saddr; - __be32 daddr = skb->nh.iph->daddr; + __be32 saddr = ip_hdr(skb)->saddr; + __be32 daddr = ip_hdr(skb)->daddr; __u32 isn = TCP_SKB_CB(skb)->when; struct dst_entry *dst = NULL; #ifdef CONFIG_SYN_COOKIES @@ -1327,7 +1329,7 @@ int tcp_v4_conn_request(struct sock *sk, ireq->rmt_addr = saddr; ireq->opt = tcp_v4_save_options(sk, skb); if (!want_cookie) - TCP_ECN_create_request(req, skb->h.th); + TCP_ECN_create_request(req, tcp_hdr(skb)); if (want_cookie) { #ifdef CONFIG_SYN_COOKIES @@ -1351,7 +1353,7 @@ int tcp_v4_conn_request(struct sock *sk, (dst = inet_csk_route_req(sk, req)) != NULL && (peer = rt_get_peer((struct rtable *)dst)) != NULL && peer->v4daddr == saddr) { - if (xtime.tv_sec < peer->tcp_ts_stamp + TCP_PAWS_MSL && + if (get_seconds() < peer->tcp_ts_stamp + TCP_PAWS_MSL && (s32)(peer->tcp_ts - req->ts_recent) > TCP_PAWS_WINDOW) { NET_INC_STATS_BH(LINUX_MIB_PAWSPASSIVEREJECTED); @@ -1375,7 +1377,7 @@ int tcp_v4_conn_request(struct sock *sk, LIMIT_NETDEBUG(KERN_DEBUG "TCP: drop open " "request from %u.%u.%u.%u/%u\n", NIPQUAD(saddr), - ntohs(skb->h.th->source)); + ntohs(tcp_hdr(skb)->source)); dst_release(dst); goto drop_and_free; } @@ -1439,7 +1441,7 @@ struct sock *tcp_v4_syn_recv_sock(struct newinet->opt = ireq->opt; ireq->opt = NULL; newinet->mc_index = inet_iif(skb); - newinet->mc_ttl = skb->nh.iph->ttl; + newinet->mc_ttl = ip_hdr(skb)->ttl; inet_csk(newsk)->icsk_ext_hdr_len = 0; if (newinet->opt) inet_csk(newsk)->icsk_ext_hdr_len = newinet->opt->optlen; @@ -1481,8 +1483,8 @@ exit: static struct sock *tcp_v4_hnd_req(struct sock *sk, struct sk_buff *skb) { - struct tcphdr *th = skb->h.th; - struct iphdr *iph = skb->nh.iph; + struct tcphdr *th = tcp_hdr(skb); + const struct iphdr *iph = ip_hdr(skb); struct sock *nsk; struct request_sock 
**prev; /* Find possible connection requests. */ @@ -1491,9 +1493,8 @@ static struct sock *tcp_v4_hnd_req(struc if (req) return tcp_check_req(sk, skb, req, prev); - nsk = inet_lookup_established(&tcp_hashinfo, skb->nh.iph->saddr, - th->source, skb->nh.iph->daddr, - th->dest, inet_iif(skb)); + nsk = inet_lookup_established(&tcp_hashinfo, iph->saddr, th->source, + iph->daddr, th->dest, inet_iif(skb)); if (nsk) { if (nsk->sk_state != TCP_TIME_WAIT) { @@ -1513,15 +1514,17 @@ static struct sock *tcp_v4_hnd_req(struc static __sum16 tcp_v4_checksum_init(struct sk_buff *skb) { + const struct iphdr *iph = ip_hdr(skb); + if (skb->ip_summed == CHECKSUM_COMPLETE) { - if (!tcp_v4_check(skb->len, skb->nh.iph->saddr, - skb->nh.iph->daddr, skb->csum)) { + if (!tcp_v4_check(skb->len, iph->saddr, + iph->daddr, skb->csum)) { skb->ip_summed = CHECKSUM_UNNECESSARY; return 0; } } - skb->csum = csum_tcpudp_nofold(skb->nh.iph->saddr, skb->nh.iph->daddr, + skb->csum = csum_tcpudp_nofold(iph->saddr, iph->daddr, skb->len, IPPROTO_TCP, 0); if (skb->len <= 76) { @@ -1555,7 +1558,7 @@ int tcp_v4_do_rcv(struct sock *sk, struc if (sk->sk_state == TCP_ESTABLISHED) { /* Fast path */ TCP_CHECK_TIMER(sk); - if (tcp_rcv_established(sk, skb, skb->h.th, skb->len)) { + if (tcp_rcv_established(sk, skb, tcp_hdr(skb), skb->len)) { rsk = sk; goto reset; } @@ -1563,7 +1566,7 @@ int tcp_v4_do_rcv(struct sock *sk, struc return 0; } - if (skb->len < (skb->h.th->doff << 2) || tcp_checksum_complete(skb)) + if (skb->len < tcp_hdrlen(skb) || tcp_checksum_complete(skb)) goto csum_err; if (sk->sk_state == TCP_LISTEN) { @@ -1581,7 +1584,7 @@ int tcp_v4_do_rcv(struct sock *sk, struc } TCP_CHECK_TIMER(sk); - if (tcp_rcv_state_process(sk, skb, skb->h.th, skb->len)) { + if (tcp_rcv_state_process(sk, skb, tcp_hdr(skb), skb->len)) { rsk = sk; goto reset; } @@ -1610,6 +1613,7 @@ csum_err: int tcp_v4_rcv(struct sk_buff *skb) { + const struct iphdr *iph; struct tcphdr *th; struct sock *sk; int ret; @@ -1623,7 +1627,7 @@ int tcp_v4_rcv(struct sk_buff *skb) if (!pskb_may_pull(skb, sizeof(struct tcphdr))) goto discard_it; - th = skb->h.th; + th = tcp_hdr(skb); if (th->doff < sizeof(struct tcphdr) / 4) goto bad_packet; @@ -1634,23 +1638,21 @@ int tcp_v4_rcv(struct sk_buff *skb) * Packet length and doff are validated by header prediction, * provided case of th->doff==0 is eliminated. * So, we defer the checks. 
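
Most of this file's churn is mechanical: skb->h.th becomes tcp_hdr(skb), skb->nh.iph becomes ip_hdr(skb), and skb->h.th->doff << 2 becomes tcp_hdrlen(skb). Their rough shape, written out as a sketch (the _sketch suffix marks these as approximations of the real inline helpers):

#include <linux/ip.h>
#include <linux/skbuff.h>
#include <linux/tcp.h>

/* Approximations of the accessors the conversion above relies on. */
static inline struct tcphdr *tcp_hdr_sketch(const struct sk_buff *skb)
{
	return (struct tcphdr *)skb_transport_header(skb);
}

static inline struct iphdr *ip_hdr_sketch(const struct sk_buff *skb)
{
	return (struct iphdr *)skb_network_header(skb);
}

static inline unsigned int tcp_hdrlen_sketch(const struct sk_buff *skb)
{
	return tcp_hdr_sketch(skb)->doff * 4;	/* doff is in 32-bit words */
}
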
*/ - if ((skb->ip_summed != CHECKSUM_UNNECESSARY && - tcp_v4_checksum_init(skb))) + if (!skb_csum_unnecessary(skb) && tcp_v4_checksum_init(skb)) goto bad_packet; - th = skb->h.th; + th = tcp_hdr(skb); + iph = ip_hdr(skb); TCP_SKB_CB(skb)->seq = ntohl(th->seq); TCP_SKB_CB(skb)->end_seq = (TCP_SKB_CB(skb)->seq + th->syn + th->fin + skb->len - th->doff * 4); TCP_SKB_CB(skb)->ack_seq = ntohl(th->ack_seq); TCP_SKB_CB(skb)->when = 0; - TCP_SKB_CB(skb)->flags = skb->nh.iph->tos; + TCP_SKB_CB(skb)->flags = iph->tos; TCP_SKB_CB(skb)->sacked = 0; - sk = __inet_lookup(&tcp_hashinfo, skb->nh.iph->saddr, th->source, - skb->nh.iph->daddr, th->dest, - inet_iif(skb)); - + sk = __inet_lookup(&tcp_hashinfo, iph->saddr, th->source, + iph->daddr, th->dest, inet_iif(skb)); if (!sk) goto no_tcp_socket; @@ -1724,8 +1726,7 @@ do_time_wait: switch (tcp_timewait_state_process(inet_twsk(sk), skb, th)) { case TCP_TW_SYN: { struct sock *sk2 = inet_lookup_listener(&tcp_hashinfo, - skb->nh.iph->daddr, - th->dest, + iph->daddr, th->dest, inet_iif(skb)); if (sk2) { inet_twsk_deschedule(inet_twsk(sk), &tcp_death_row); @@ -1770,7 +1771,7 @@ int tcp_v4_remember_stamp(struct sock *s if (peer) { if ((s32)(peer->tcp_ts - tp->rx_opt.ts_recent) <= 0 || - (peer->tcp_ts_stamp + TCP_PAWS_MSL < xtime.tv_sec && + (peer->tcp_ts_stamp + TCP_PAWS_MSL < get_seconds() && peer->tcp_ts_stamp <= tp->rx_opt.ts_recent_stamp)) { peer->tcp_ts_stamp = tp->rx_opt.ts_recent_stamp; peer->tcp_ts = tp->rx_opt.ts_recent; @@ -1791,7 +1792,7 @@ int tcp_v4_tw_remember_stamp(struct inet const struct tcp_timewait_sock *tcptw = tcp_twsk((struct sock *)tw); if ((s32)(peer->tcp_ts - tcptw->tw_ts_recent) <= 0 || - (peer->tcp_ts_stamp + TCP_PAWS_MSL < xtime.tv_sec && + (peer->tcp_ts_stamp + TCP_PAWS_MSL < get_seconds() && peer->tcp_ts_stamp <= tcptw->tw_ts_recent_stamp)) { peer->tcp_ts_stamp = tcptw->tw_ts_recent_stamp; peer->tcp_ts = tcptw->tw_ts_recent; @@ -1890,7 +1891,7 @@ int tcp_v4_destroy_sock(struct sock *sk) tcp_cleanup_congestion_control(sk); /* Cleanup up the write buffer. */ - sk_stream_writequeue_purge(sk); + tcp_write_queue_purge(sk); /* Cleans up our, hopefully empty, out_of_order_queue. 
*/ __skb_queue_purge(&tp->out_of_order_queue); @@ -2293,13 +2294,13 @@ static void get_openreq4(struct sock *sk req); } -static void get_tcp4_sock(struct sock *sp, char *tmpbuf, int i) +static void get_tcp4_sock(struct sock *sk, char *tmpbuf, int i) { int timer_active; unsigned long timer_expires; - struct tcp_sock *tp = tcp_sk(sp); - const struct inet_connection_sock *icsk = inet_csk(sp); - struct inet_sock *inet = inet_sk(sp); + struct tcp_sock *tp = tcp_sk(sk); + const struct inet_connection_sock *icsk = inet_csk(sk); + struct inet_sock *inet = inet_sk(sk); __be32 dest = inet->daddr; __be32 src = inet->rcv_saddr; __u16 destp = ntohs(inet->dport); @@ -2311,9 +2312,9 @@ static void get_tcp4_sock(struct sock *s } else if (icsk->icsk_pending == ICSK_TIME_PROBE0) { timer_active = 4; timer_expires = icsk->icsk_timeout; - } else if (timer_pending(&sp->sk_timer)) { + } else if (timer_pending(&sk->sk_timer)) { timer_active = 2; - timer_expires = sp->sk_timer.expires; + timer_expires = sk->sk_timer.expires; } else { timer_active = 0; timer_expires = jiffies; @@ -2321,17 +2322,17 @@ static void get_tcp4_sock(struct sock *s sprintf(tmpbuf, "%4d: %08X:%04X %08X:%04X %02X %08X:%08X %02X:%08lX " "%08X %5d %8d %lu %d %p %u %u %u %u %d", - i, src, srcp, dest, destp, sp->sk_state, + i, src, srcp, dest, destp, sk->sk_state, tp->write_seq - tp->snd_una, - sp->sk_state == TCP_LISTEN ? sp->sk_ack_backlog : + sk->sk_state == TCP_LISTEN ? sk->sk_ack_backlog : (tp->rcv_nxt - tp->copied_seq), timer_active, jiffies_to_clock_t(timer_expires - jiffies), icsk->icsk_retransmits, - sock_i_uid(sp), + sock_i_uid(sk), icsk->icsk_probes_out, - sock_i_ino(sp), - atomic_read(&sp->sk_refcnt), sp, + sock_i_ino(sk), + atomic_read(&sk->sk_refcnt), sk, icsk->icsk_rto, icsk->icsk_ack.ato, (icsk->icsk_ack.quick << 1) | icsk->icsk_ack.pingpong, diff -puN net/ipv4/tcp_minisocks.c~git-net net/ipv4/tcp_minisocks.c --- a/net/ipv4/tcp_minisocks.c~git-net +++ a/net/ipv4/tcp_minisocks.c @@ -149,7 +149,7 @@ kill_with_rst: tw->tw_substate = TCP_TIME_WAIT; tcptw->tw_rcv_nxt = TCP_SKB_CB(skb)->end_seq; if (tmp_opt.saw_tstamp) { - tcptw->tw_ts_recent_stamp = xtime.tv_sec; + tcptw->tw_ts_recent_stamp = get_seconds(); tcptw->tw_ts_recent = tmp_opt.rcv_tsval; } @@ -208,7 +208,7 @@ kill: if (tmp_opt.saw_tstamp) { tcptw->tw_ts_recent = tmp_opt.rcv_tsval; - tcptw->tw_ts_recent_stamp = xtime.tv_sec; + tcptw->tw_ts_recent_stamp = get_seconds(); } inet_twsk_put(tw); @@ -246,7 +246,7 @@ kill: if (paws_reject) NET_INC_STATS_BH(LINUX_MIB_PAWSESTABREJECTED); - if(!th->rst) { + if (!th->rst) { /* In this case we must reset the TIMEWAIT timer. * * If it is ACKless SYN it may be both old duplicate @@ -324,7 +324,7 @@ void tcp_time_wait(struct sock *sk, int if (tcp_alloc_md5sig_pool() == NULL) BUG(); } - } while(0); + } while (0); #endif /* Linkage updates. 
*/ @@ -387,8 +387,8 @@ struct sock *tcp_create_openreq_child(st /* Now setup tcp_sock */ newtp = tcp_sk(newsk); newtp->pred_flags = 0; - newtp->rcv_nxt = treq->rcv_isn + 1; - newtp->snd_nxt = newtp->snd_una = newtp->snd_sml = treq->snt_isn + 1; + newtp->rcv_wup = newtp->copied_seq = newtp->rcv_nxt = treq->rcv_isn + 1; + newtp->snd_sml = newtp->snd_una = newtp->snd_nxt = treq->snt_isn + 1; tcp_prequeue_init(newtp); @@ -422,10 +422,8 @@ struct sock *tcp_create_openreq_child(st tcp_set_ca_state(newsk, TCP_CA_Open); tcp_init_xmit_timers(newsk); skb_queue_head_init(&newtp->out_of_order_queue); - newtp->rcv_wup = treq->rcv_isn + 1; newtp->write_seq = treq->snt_isn + 1; newtp->pushed_seq = newtp->write_seq; - newtp->copied_seq = treq->rcv_isn + 1; newtp->rx_opt.saw_tstamp = 0; @@ -440,7 +438,7 @@ struct sock *tcp_create_openreq_child(st keepalive_time_when(newtp)); newtp->rx_opt.tstamp_ok = ireq->tstamp_ok; - if((newtp->rx_opt.sack_ok = ireq->sack_ok) != 0) { + if ((newtp->rx_opt.sack_ok = ireq->sack_ok) != 0) { if (sysctl_tcp_fack) newtp->rx_opt.sack_ok |= 2; } @@ -455,12 +453,13 @@ struct sock *tcp_create_openreq_child(st newtp->rx_opt.snd_wscale = newtp->rx_opt.rcv_wscale = 0; newtp->window_clamp = min(newtp->window_clamp, 65535U); } - newtp->snd_wnd = ntohs(skb->h.th->window) << newtp->rx_opt.snd_wscale; + newtp->snd_wnd = (ntohs(tcp_hdr(skb)->window) << + newtp->rx_opt.snd_wscale); newtp->max_window = newtp->snd_wnd; if (newtp->rx_opt.tstamp_ok) { newtp->rx_opt.ts_recent = req->ts_recent; - newtp->rx_opt.ts_recent_stamp = xtime.tv_sec; + newtp->rx_opt.ts_recent_stamp = get_seconds(); newtp->tcp_header_len = sizeof(struct tcphdr) + TCPOLEN_TSTAMP_ALIGNED; } else { newtp->rx_opt.ts_recent_stamp = 0; @@ -490,7 +489,7 @@ struct sock *tcp_check_req(struct sock * struct request_sock *req, struct request_sock **prev) { - struct tcphdr *th = skb->h.th; + const struct tcphdr *th = tcp_hdr(skb); __be32 flg = tcp_flag_word(th) & (TCP_FLAG_RST|TCP_FLAG_SYN|TCP_FLAG_ACK); int paws_reject = 0; struct tcp_options_received tmp_opt; @@ -506,7 +505,7 @@ struct sock *tcp_check_req(struct sock * * it can be estimated (approximately) * from another data. */ - tmp_opt.ts_recent_stamp = xtime.tv_sec - ((TCP_TIMEOUT_INIT/HZ)<retrans); + tmp_opt.ts_recent_stamp = get_seconds() - ((TCP_TIMEOUT_INIT/HZ)<retrans); paws_reject = tcp_paws_check(&tmp_opt, th->rst); } } @@ -712,8 +711,8 @@ int tcp_child_process(struct sock *paren int state = child->sk_state; if (!sock_owned_by_user(child)) { - ret = tcp_rcv_state_process(child, skb, skb->h.th, skb->len); - + ret = tcp_rcv_state_process(child, skb, tcp_hdr(skb), + skb->len); /* Wakeup parent, send SIGIO */ if (state == TCP_SYN_RECV && child->sk_state != state) parent->sk_data_ready(parent, 0); diff -puN net/ipv4/tcp_output.c~git-net net/ipv4/tcp_output.c --- a/net/ipv4/tcp_output.c~git-net +++ a/net/ipv4/tcp_output.c @@ -62,14 +62,13 @@ int sysctl_tcp_base_mss __read_mostly = /* By default, RFC2861 behavior. */ int sysctl_tcp_slow_start_after_idle __read_mostly = 1; -static void update_send_head(struct sock *sk, struct tcp_sock *tp, - struct sk_buff *skb) +static void update_send_head(struct sock *sk, struct sk_buff *skb) { - sk->sk_send_head = skb->next; - if (sk->sk_send_head == (struct sk_buff *)&sk->sk_write_queue) - sk->sk_send_head = NULL; + struct tcp_sock *tp = tcp_sk(sk); + + tcp_advance_send_head(sk, skb); tp->snd_nxt = TCP_SKB_CB(skb)->end_seq; - tcp_packets_out_inc(sk, tp, skb); + tcp_packets_out_inc(sk, skb); } /* SND.NXT, if window was not shrunk. 
@@ -78,8 +77,10 @@ static void update_send_head(struct sock * Anything in between SND.UNA...SND.UNA+SND.WND also can be already * invalid. OK, let's make this for now: */ -static inline __u32 tcp_acceptable_seq(struct sock *sk, struct tcp_sock *tp) +static inline __u32 tcp_acceptable_seq(struct sock *sk) { + struct tcp_sock *tp = tcp_sk(sk); + if (!before(tp->snd_una+tp->snd_wnd, tp->snd_nxt)) return tp->snd_nxt; else @@ -238,7 +239,7 @@ static u16 tcp_select_window(struct sock u32 new_win = __tcp_select_window(sk); /* Never shrink the offered window */ - if(new_win < cur_win) { + if (new_win < cur_win) { /* Danger Will Robinson! * Don't update rcv_wup/rcv_wnd here or else * we will not be able to advertise a zero @@ -289,10 +290,12 @@ static void tcp_build_and_update_options (TCPOPT_SACK << 8) | (TCPOLEN_SACK_BASE + (tp->rx_opt.eff_sacks * TCPOLEN_SACK_PERBLOCK))); - for(this_sack = 0; this_sack < tp->rx_opt.eff_sacks; this_sack++) { + + for (this_sack = 0; this_sack < tp->rx_opt.eff_sacks; this_sack++) { *ptr++ = htonl(sp[this_sack].start_seq); *ptr++ = htonl(sp[this_sack].end_seq); } + if (tp->rx_opt.dsack) { tp->rx_opt.dsack = 0; tp->rx_opt.eff_sacks--; @@ -337,7 +340,7 @@ static void tcp_syn_build_options(__be32 */ *ptr++ = htonl((TCPOPT_MSS << 24) | (TCPOLEN_MSS << 16) | mss); if (ts) { - if(sack) + if (sack) *ptr++ = htonl((TCPOPT_SACK_PERM << 24) | (TCPOLEN_SACK_PERM << 16) | (TCPOPT_TIMESTAMP << 8) | @@ -349,7 +352,7 @@ static void tcp_syn_build_options(__be32 TCPOLEN_TIMESTAMP); *ptr++ = htonl(tstamp); /* TSVAL */ *ptr++ = htonl(ts_recent); /* TSECR */ - } else if(sack) + } else if (sack) *ptr++ = htonl((TCPOPT_NOP << 24) | (TCPOPT_NOP << 16) | (TCPOPT_SACK_PERM << 8) | @@ -430,7 +433,7 @@ static int tcp_transmit_skb(struct sock sysctl_flags = 0; if (unlikely(tcb->flags & TCPCB_FLAG_SYN)) { tcp_header_size = sizeof(struct tcphdr) + TCPOLEN_MSS; - if(sysctl_tcp_timestamps) { + if (sysctl_tcp_timestamps) { tcp_header_size += TCPOLEN_TSTAMP_ALIGNED; sysctl_flags |= SYSCTL_FLAG_TSTAMPS; } @@ -465,11 +468,12 @@ static int tcp_transmit_skb(struct sock tcp_header_size += TCPOLEN_MD5SIG_ALIGNED; #endif - th = (struct tcphdr *) skb_push(skb, tcp_header_size); - skb->h.th = th; + skb_push(skb, tcp_header_size); + skb_reset_transport_header(skb); skb_set_owner_w(skb, sk); /* Build TCP header and checksum it. */ + th = tcp_hdr(skb); th->source = inet->sport; th->dest = inet->dport; th->seq = htonl(tcb->seq); @@ -515,7 +519,7 @@ static int tcp_transmit_skb(struct sock md5 ? &md5_hash_location : #endif NULL); - TCP_ECN_send(sk, tp, skb, tcp_header_size); + TCP_ECN_send(sk, skb, tcp_header_size); } #ifdef CONFIG_TCP_MD5SIG @@ -524,7 +528,7 @@ static int tcp_transmit_skb(struct sock tp->af_specific->calc_md5_hash(md5_hash_location, md5, sk, NULL, NULL, - skb->h.th, + tcp_hdr(skb), sk->sk_protocol, skb->len); } @@ -545,7 +549,7 @@ static int tcp_transmit_skb(struct sock if (likely(err <= 0)) return err; - tcp_enter_cwr(sk); + tcp_enter_cwr(sk, 1); return net_xmit_eval(err); @@ -567,12 +571,8 @@ static void tcp_queue_skb(struct sock *s /* Advance write_seq and place onto the write_queue. */ tp->write_seq = TCP_SKB_CB(skb)->end_seq; skb_header_release(skb); - __skb_queue_tail(&sk->sk_write_queue, skb); + tcp_add_write_queue_tail(sk, skb); sk_charge_skb(sk, skb); - - /* Queue it, remembering where we must start sending. 
*/ - if (sk->sk_send_head == NULL) - sk->sk_send_head = skb; } static void tcp_set_skb_tso_segs(struct sock *sk, struct sk_buff *skb, unsigned int mss_now) @@ -705,7 +705,7 @@ int tcp_fragment(struct sock *sk, struct /* Link BUFF into the send queue. */ skb_header_release(buff); - __skb_append(skb, buff, &sk->sk_write_queue); + tcp_insert_write_queue_after(skb, buff, sk); return 0; } @@ -736,7 +736,7 @@ static void __pskb_trim_head(struct sk_b } skb_shinfo(skb)->nr_frags = k; - skb->tail = skb->data; + skb_reset_tail_pointer(skb); skb->data_len -= len; skb->len = skb->data_len; } @@ -930,8 +930,9 @@ unsigned int tcp_current_mss(struct sock /* Congestion window validation. (RFC2861) */ -static void tcp_cwnd_validate(struct sock *sk, struct tcp_sock *tp) +static void tcp_cwnd_validate(struct sock *sk) { + struct tcp_sock *tp = tcp_sk(sk); __u32 packets_out = tp->packets_out; if (packets_out >= tp->snd_cwnd) { @@ -1056,7 +1057,7 @@ static inline int tcp_snd_wnd_test(struc return !after(end_seq, tp->snd_una + tp->snd_wnd); } -/* This checks if the data bearing packet SKB (usually sk->sk_send_head) +/* This checks if the data bearing packet SKB (usually tcp_send_head(sk)) * should be put on the wire right now. If so, it returns the number of * packets allowed by the congestion window. */ @@ -1079,15 +1080,10 @@ static unsigned int tcp_snd_test(struct return cwnd_quota; } -static inline int tcp_skb_is_last(const struct sock *sk, - const struct sk_buff *skb) -{ - return skb->next == (struct sk_buff *)&sk->sk_write_queue; -} - -int tcp_may_send_now(struct sock *sk, struct tcp_sock *tp) +int tcp_may_send_now(struct sock *sk) { - struct sk_buff *skb = sk->sk_send_head; + struct tcp_sock *tp = tcp_sk(sk); + struct sk_buff *skb = tcp_send_head(sk); return (skb && tcp_snd_test(sk, skb, tcp_current_mss(sk, 1), @@ -1143,7 +1139,7 @@ static int tso_fragment(struct sock *sk, /* Link BUFF into the send queue. */ skb_header_release(buff); - __skb_append(skb, buff, &sk->sk_write_queue); + tcp_insert_write_queue_after(skb, buff, sk); return 0; } @@ -1153,8 +1149,9 @@ static int tso_fragment(struct sock *sk, * * This algorithm is from John Heffner. */ -static int tcp_tso_should_defer(struct sock *sk, struct tcp_sock *tp, struct sk_buff *skb) +static int tcp_tso_should_defer(struct sock *sk, struct sk_buff *skb) { + struct tcp_sock *tp = tcp_sk(sk); const struct inet_connection_sock *icsk = inet_csk(sk); u32 send_win, cong_win, limit, in_flight; @@ -1249,10 +1246,10 @@ static int tcp_mtu_probe(struct sock *sk /* Have enough data in the send queue to probe? 
*/ len = 0; - if ((skb = sk->sk_send_head) == NULL) + if ((skb = tcp_send_head(sk)) == NULL) return -1; while ((len += skb->len) < probe_size && !tcp_skb_is_last(sk, skb)) - skb = skb->next; + skb = tcp_write_queue_next(sk, skb); if (len < probe_size) return -1; @@ -1279,9 +1276,9 @@ static int tcp_mtu_probe(struct sock *sk return -1; sk_charge_skb(sk, nskb); - skb = sk->sk_send_head; - __skb_insert(nskb, skb->prev, skb, &sk->sk_write_queue); - sk->sk_send_head = nskb; + skb = tcp_send_head(sk); + tcp_insert_write_queue_before(nskb, skb, sk); + tcp_advance_send_head(sk, skb); TCP_SKB_CB(nskb)->seq = TCP_SKB_CB(skb)->seq; TCP_SKB_CB(nskb)->end_seq = TCP_SKB_CB(skb)->seq + probe_size; @@ -1292,7 +1289,7 @@ static int tcp_mtu_probe(struct sock *sk len = 0; while (len < probe_size) { - next = skb->next; + next = tcp_write_queue_next(sk, skb); copy = min_t(int, skb->len, probe_size - len); if (nskb->ip_summed) @@ -1305,7 +1302,7 @@ static int tcp_mtu_probe(struct sock *sk /* We've eaten all the data from this skb. * Throw it away. */ TCP_SKB_CB(nskb)->flags |= TCP_SKB_CB(skb)->flags; - __skb_unlink(skb, &sk->sk_write_queue); + tcp_unlink_write_queue(skb, sk); sk_stream_free_skb(sk, skb); } else { TCP_SKB_CB(nskb)->flags |= TCP_SKB_CB(skb)->flags & @@ -1333,7 +1330,7 @@ static int tcp_mtu_probe(struct sock *sk /* Decrement cwnd here because we are sending * effectively two packets. */ tp->snd_cwnd--; - update_send_head(sk, tp, nskb); + update_send_head(sk, nskb); icsk->icsk_mtup.probe_size = tcp_mss_to_mtu(sk, nskb->len); tp->mtu_probe.probe_seq_start = TCP_SKB_CB(nskb)->seq; @@ -1377,7 +1374,7 @@ static int tcp_write_xmit(struct sock *s sent_pkts = 1; } - while ((skb = sk->sk_send_head)) { + while ((skb = tcp_send_head(sk))) { unsigned int limit; tso_segs = tcp_init_tso_segs(sk, skb, mss_now); @@ -1396,7 +1393,7 @@ static int tcp_write_xmit(struct sock *s nonagle : TCP_NAGLE_PUSH)))) break; } else { - if (tcp_tso_should_defer(sk, tp, skb)) + if (tcp_tso_should_defer(sk, skb)) break; } @@ -1425,31 +1422,31 @@ static int tcp_write_xmit(struct sock *s /* Advance the send_head. This one is sent out. * This call will increment packets_out. */ - update_send_head(sk, tp, skb); + update_send_head(sk, skb); tcp_minshall_update(tp, mss_now, skb); sent_pkts++; } if (likely(sent_pkts)) { - tcp_cwnd_validate(sk, tp); + tcp_cwnd_validate(sk); return 0; } - return !tp->packets_out && sk->sk_send_head; + return !tp->packets_out && tcp_send_head(sk); } /* Push out any pending frames which were held back due to * TCP_CORK or attempt at coalescing tiny packets. * The socket must be locked by the caller. 
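
From here on, tcp_output.c stops touching sk->sk_write_queue and sk->sk_send_head directly: lookups go through tcp_send_head()/tcp_write_queue_head(), traversal through the tcp_for_write_queue*() iterators, and edits through tcp_insert_write_queue_after()/tcp_unlink_write_queue() rather than raw __skb_append()/__skb_unlink(). Their approximate shape, as a sketch of the abstraction rather than the exact header contents:

#include <linux/skbuff.h>
#include <net/sock.h>

/* Sketches of the write-queue helpers referenced above. */
static inline struct sk_buff *tcp_send_head_sketch(struct sock *sk)
{
	return sk->sk_send_head;		/* first not-yet-sent skb */
}

static inline struct sk_buff *tcp_write_queue_head_sketch(struct sock *sk)
{
	return skb_peek(&sk->sk_write_queue);	/* oldest queued skb */
}

static inline int tcp_skb_is_last_sketch(const struct sock *sk,
					 const struct sk_buff *skb)
{
	return skb->next == (struct sk_buff *)&sk->sk_write_queue;
}

/* Whole-queue walk; loops that only want sent-but-unacked data break out
 * at tcp_send_head(sk), as tcp_simple_retransmit() does above.
 */
#define tcp_for_write_queue_sketch(skb, sk)				\
	for ((skb) = (sk)->sk_write_queue.next;				\
	     (skb) != (struct sk_buff *)&(sk)->sk_write_queue;		\
	     (skb) = (skb)->next)
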
*/ -void __tcp_push_pending_frames(struct sock *sk, struct tcp_sock *tp, - unsigned int cur_mss, int nonagle) +void __tcp_push_pending_frames(struct sock *sk, unsigned int cur_mss, + int nonagle) { - struct sk_buff *skb = sk->sk_send_head; + struct sk_buff *skb = tcp_send_head(sk); if (skb) { if (tcp_write_xmit(sk, cur_mss, nonagle)) - tcp_check_probe_timer(sk, tp); + tcp_check_probe_timer(sk); } } @@ -1459,7 +1456,7 @@ void __tcp_push_pending_frames(struct so void tcp_push_one(struct sock *sk, unsigned int mss_now) { struct tcp_sock *tp = tcp_sk(sk); - struct sk_buff *skb = sk->sk_send_head; + struct sk_buff *skb = tcp_send_head(sk); unsigned int tso_segs, cwnd_quota; BUG_ON(!skb || skb->len < mss_now); @@ -1493,8 +1490,8 @@ void tcp_push_one(struct sock *sk, unsig TCP_SKB_CB(skb)->when = tcp_time_stamp; if (likely(!tcp_transmit_skb(sk, skb, 1, sk->sk_allocation))) { - update_send_head(sk, tp, skb); - tcp_cwnd_validate(sk, tp); + update_send_head(sk, skb); + tcp_cwnd_validate(sk); return; } } @@ -1620,7 +1617,7 @@ u32 __tcp_select_window(struct sock *sk) static void tcp_retrans_try_collapse(struct sock *sk, struct sk_buff *skb, int mss_now) { struct tcp_sock *tp = tcp_sk(sk); - struct sk_buff *next_skb = skb->next; + struct sk_buff *next_skb = tcp_write_queue_next(sk, skb); /* The first test we must make is that neither of these two * SKB's are still referenced by someone else. @@ -1630,7 +1627,7 @@ static void tcp_retrans_try_collapse(str u16 flags = TCP_SKB_CB(skb)->flags; /* Also punt if next skb has been SACK'd. */ - if(TCP_SKB_CB(next_skb)->sacked & TCPCB_SACKED_ACKED) + if (TCP_SKB_CB(next_skb)->sacked & TCPCB_SACKED_ACKED) return; /* Next skb is out of window. */ @@ -1652,9 +1649,11 @@ static void tcp_retrans_try_collapse(str clear_all_retrans_hints(tp); /* Ok. We will be able to collapse the packet. */ - __skb_unlink(next_skb, &sk->sk_write_queue); + tcp_unlink_write_queue(next_skb, sk); - memcpy(skb_put(skb, next_skb_size), next_skb->data, next_skb_size); + skb_copy_from_linear_data(next_skb, + skb_put(skb, next_skb_size), + next_skb_size); if (next_skb->ip_summed == CHECKSUM_PARTIAL) skb->ip_summed = CHECKSUM_PARTIAL; @@ -1706,7 +1705,9 @@ void tcp_simple_retransmit(struct sock * unsigned int mss = tcp_current_mss(sk, 0); int lost = 0; - sk_stream_for_retrans_queue(skb, sk) { + tcp_for_write_queue(skb, sk) { + if (skb == tcp_send_head(sk)) + break; if (skb->len > mss && !(TCP_SKB_CB(skb)->sacked&TCPCB_SACKED_ACKED)) { if (TCP_SKB_CB(skb)->sacked&TCPCB_SACKED_RETRANS) { @@ -1788,13 +1789,13 @@ int tcp_retransmit_skb(struct sock *sk, } /* Collapse two adjacent packets if worthwhile and we can. 
*/ - if(!(TCP_SKB_CB(skb)->flags & TCPCB_FLAG_SYN) && - (skb->len < (cur_mss >> 1)) && - (skb->next != sk->sk_send_head) && - (skb->next != (struct sk_buff *)&sk->sk_write_queue) && - (skb_shinfo(skb)->nr_frags == 0 && skb_shinfo(skb->next)->nr_frags == 0) && - (tcp_skb_pcount(skb) == 1 && tcp_skb_pcount(skb->next) == 1) && - (sysctl_tcp_retrans_collapse != 0)) + if (!(TCP_SKB_CB(skb)->flags & TCPCB_FLAG_SYN) && + (skb->len < (cur_mss >> 1)) && + (tcp_write_queue_next(sk, skb) != tcp_send_head(sk)) && + (!tcp_skb_is_last(sk, skb)) && + (skb_shinfo(skb)->nr_frags == 0 && skb_shinfo(tcp_write_queue_next(sk, skb))->nr_frags == 0) && + (tcp_skb_pcount(skb) == 1 && tcp_skb_pcount(tcp_write_queue_next(sk, skb)) == 1) && + (sysctl_tcp_retrans_collapse != 0)) tcp_retrans_try_collapse(sk, skb, cur_mss); if (inet_csk(sk)->icsk_af_ops->rebuild_header(sk)) @@ -1804,9 +1805,9 @@ int tcp_retransmit_skb(struct sock *sk, * retransmit when old data is attached. So strip it off * since it is cheap to do so and saves bytes on the network. */ - if(skb->len > 0 && - (TCP_SKB_CB(skb)->flags & TCPCB_FLAG_FIN) && - tp->snd_una == (TCP_SKB_CB(skb)->end_seq - 1)) { + if (skb->len > 0 && + (TCP_SKB_CB(skb)->flags & TCPCB_FLAG_FIN) && + tp->snd_una == (TCP_SKB_CB(skb)->end_seq - 1)) { if (!pskb_trim(skb, 0)) { TCP_SKB_CB(skb)->seq = TCP_SKB_CB(skb)->end_seq - 1; skb_shinfo(skb)->gso_segs = 1; @@ -1872,15 +1873,17 @@ void tcp_xmit_retransmit_queue(struct so skb = tp->retransmit_skb_hint; packet_cnt = tp->retransmit_cnt_hint; }else{ - skb = sk->sk_write_queue.next; + skb = tcp_write_queue_head(sk); packet_cnt = 0; } /* First pass: retransmit lost packets. */ if (tp->lost_out) { - sk_stream_for_retrans_queue_from(skb, sk) { + tcp_for_write_queue_from(skb, sk) { __u8 sacked = TCP_SKB_CB(skb)->sacked; + if (skb == tcp_send_head(sk)) + break; /* we could do better than to assign each time */ tp->retransmit_skb_hint = skb; tp->retransmit_cnt_hint = packet_cnt; @@ -1906,8 +1909,7 @@ void tcp_xmit_retransmit_queue(struct so else NET_INC_STATS_BH(LINUX_MIB_TCPSLOWSTARTRETRANS); - if (skb == - skb_peek(&sk->sk_write_queue)) + if (skb == tcp_write_queue_head(sk)) inet_csk_reset_xmit_timer(sk, ICSK_TIME_RETRANS, inet_csk(sk)->icsk_rto, TCP_RTO_MAX); @@ -1937,18 +1939,20 @@ void tcp_xmit_retransmit_queue(struct so * segments to send. 
*/ - if (tcp_may_send_now(sk, tp)) + if (tcp_may_send_now(sk)) return; if (tp->forward_skb_hint) { skb = tp->forward_skb_hint; packet_cnt = tp->forward_cnt_hint; } else{ - skb = sk->sk_write_queue.next; + skb = tcp_write_queue_head(sk); packet_cnt = 0; } - sk_stream_for_retrans_queue_from(skb, sk) { + tcp_for_write_queue_from(skb, sk) { + if (skb == tcp_send_head(sk)) + break; tp->forward_cnt_hint = packet_cnt; tp->forward_skb_hint = skb; @@ -1973,7 +1977,7 @@ void tcp_xmit_retransmit_queue(struct so break; } - if (skb == skb_peek(&sk->sk_write_queue)) + if (skb == tcp_write_queue_head(sk)) inet_csk_reset_xmit_timer(sk, ICSK_TIME_RETRANS, inet_csk(sk)->icsk_rto, TCP_RTO_MAX); @@ -1989,7 +1993,7 @@ void tcp_xmit_retransmit_queue(struct so void tcp_send_fin(struct sock *sk) { struct tcp_sock *tp = tcp_sk(sk); - struct sk_buff *skb = skb_peek_tail(&sk->sk_write_queue); + struct sk_buff *skb = tcp_write_queue_tail(sk); int mss_now; /* Optimization, tack on the FIN if we have a queue of @@ -1998,7 +2002,7 @@ void tcp_send_fin(struct sock *sk) */ mss_now = tcp_current_mss(sk, 1); - if (sk->sk_send_head != NULL) { + if (tcp_send_head(sk) != NULL) { TCP_SKB_CB(skb)->flags |= TCPCB_FLAG_FIN; TCP_SKB_CB(skb)->end_seq++; tp->write_seq++; @@ -2025,7 +2029,7 @@ void tcp_send_fin(struct sock *sk) TCP_SKB_CB(skb)->end_seq = TCP_SKB_CB(skb)->seq + 1; tcp_queue_skb(sk, skb); } - __tcp_push_pending_frames(sk, tp, mss_now, TCP_NAGLE_OFF); + __tcp_push_pending_frames(sk, mss_now, TCP_NAGLE_OFF); } /* We get here when a process closes a file descriptor (either due to @@ -2035,7 +2039,6 @@ void tcp_send_fin(struct sock *sk) */ void tcp_send_active_reset(struct sock *sk, gfp_t priority) { - struct tcp_sock *tp = tcp_sk(sk); struct sk_buff *skb; /* NOTE: No TCP options attached and we never retransmit this. */ @@ -2055,7 +2058,7 @@ void tcp_send_active_reset(struct sock * skb_shinfo(skb)->gso_type = 0; /* Send it off. 
*/ - TCP_SKB_CB(skb)->seq = tcp_acceptable_seq(sk, tp); + TCP_SKB_CB(skb)->seq = tcp_acceptable_seq(sk); TCP_SKB_CB(skb)->end_seq = TCP_SKB_CB(skb)->seq; TCP_SKB_CB(skb)->when = tcp_time_stamp; if (tcp_transmit_skb(sk, skb, 0, priority)) @@ -2071,7 +2074,7 @@ int tcp_send_synack(struct sock *sk) { struct sk_buff* skb; - skb = skb_peek(&sk->sk_write_queue); + skb = tcp_write_queue_head(sk); if (skb == NULL || !(TCP_SKB_CB(skb)->flags&TCPCB_FLAG_SYN)) { printk(KERN_DEBUG "tcp_send_synack: wrong queue state\n"); return -EFAULT; @@ -2081,9 +2084,9 @@ int tcp_send_synack(struct sock *sk) struct sk_buff *nskb = skb_copy(skb, GFP_ATOMIC); if (nskb == NULL) return -ENOMEM; - __skb_unlink(skb, &sk->sk_write_queue); + tcp_unlink_write_queue(skb, sk); skb_header_release(nskb); - __skb_queue_head(&sk->sk_write_queue, nskb); + __tcp_add_write_queue_head(sk, nskb); sk_stream_free_skb(sk, skb); sk_charge_skb(sk, nskb); skb = nskb; @@ -2133,8 +2136,10 @@ struct sk_buff * tcp_make_synack(struct if (md5) tcp_header_size += TCPOLEN_MD5SIG_ALIGNED; #endif - skb->h.th = th = (struct tcphdr *) skb_push(skb, tcp_header_size); + skb_push(skb, tcp_header_size); + skb_reset_transport_header(skb); + th = tcp_hdr(skb); memset(th, 0, sizeof(struct tcphdr)); th->syn = 1; th->ack = 1; @@ -2188,7 +2193,7 @@ struct sk_buff * tcp_make_synack(struct tp->af_specific->calc_md5_hash(md5_hash_location, md5, NULL, dst, req, - skb->h.th, sk->sk_protocol, + tcp_hdr(skb), sk->sk_protocol, skb->len); } #endif @@ -2271,7 +2276,7 @@ int tcp_connect(struct sock *sk) skb_reserve(buff, MAX_TCP_HEADER); TCP_SKB_CB(buff)->flags = TCPCB_FLAG_SYN; - TCP_ECN_send_syn(sk, tp, buff); + TCP_ECN_send_syn(sk, buff); TCP_SKB_CB(buff)->sacked = 0; skb_shinfo(buff)->gso_segs = 1; skb_shinfo(buff)->gso_size = 0; @@ -2285,7 +2290,7 @@ int tcp_connect(struct sock *sk) TCP_SKB_CB(buff)->when = tcp_time_stamp; tp->retrans_stamp = TCP_SKB_CB(buff)->when; skb_header_release(buff); - __skb_queue_tail(&sk->sk_write_queue, buff); + __tcp_add_write_queue_tail(sk, buff); sk_charge_skb(sk, buff); tp->packets_out += tcp_skb_pcount(buff); tcp_transmit_skb(sk, buff, 1, GFP_KERNEL); @@ -2363,7 +2368,6 @@ void tcp_send_ack(struct sock *sk) { /* If we have been reset, we may not send again. */ if (sk->sk_state != TCP_CLOSE) { - struct tcp_sock *tp = tcp_sk(sk); struct sk_buff *buff; /* We are not putting this on the write queue, so @@ -2389,7 +2393,7 @@ void tcp_send_ack(struct sock *sk) skb_shinfo(buff)->gso_type = 0; /* Send it off, this clears delayed acks for us. 
*/ - TCP_SKB_CB(buff)->seq = TCP_SKB_CB(buff)->end_seq = tcp_acceptable_seq(sk, tp); + TCP_SKB_CB(buff)->seq = TCP_SKB_CB(buff)->end_seq = tcp_acceptable_seq(sk); TCP_SKB_CB(buff)->when = tcp_time_stamp; tcp_transmit_skb(sk, buff, 0, GFP_ATOMIC); } @@ -2441,7 +2445,7 @@ int tcp_write_wakeup(struct sock *sk) struct tcp_sock *tp = tcp_sk(sk); struct sk_buff *skb; - if ((skb = sk->sk_send_head) != NULL && + if ((skb = tcp_send_head(sk)) != NULL && before(TCP_SKB_CB(skb)->seq, tp->snd_una+tp->snd_wnd)) { int err; unsigned int mss = tcp_current_mss(sk, 0); @@ -2467,7 +2471,7 @@ int tcp_write_wakeup(struct sock *sk) TCP_SKB_CB(skb)->when = tcp_time_stamp; err = tcp_transmit_skb(sk, skb, 1, GFP_ATOMIC); if (!err) { - update_send_head(sk, tp, skb); + update_send_head(sk, skb); } return err; } else { @@ -2491,7 +2495,7 @@ void tcp_send_probe0(struct sock *sk) err = tcp_write_wakeup(sk); - if (tp->packets_out || !sk->sk_send_head) { + if (tp->packets_out || !tcp_send_head(sk)) { /* Cancel probe timer, if it is not required. */ icsk->icsk_probes_out = 0; icsk->icsk_backoff = 0; diff -puN net/ipv4/tcp_probe.c~git-net net/ipv4/tcp_probe.c --- a/net/ipv4/tcp_probe.c~git-net +++ a/net/ipv4/tcp_probe.c @@ -26,6 +26,8 @@ #include #include #include +#include +#include #include #include @@ -34,43 +36,45 @@ MODULE_AUTHOR("Stephen Hemminger dport) == port || - ntohs(inet->sport) == port) { + /* Only update if port matches */ + if ((port == 0 || ntohs(inet->dport) == port || ntohs(inet->sport) == port) + && (full || tp->snd_cwnd != tcpw.lastcwnd)) { printl("%d.%d.%d.%d:%u %d.%d.%d.%d:%u %d %#x %#x %u %u %u\n", NIPQUAD(inet->saddr), ntohs(inet->sport), NIPQUAD(inet->daddr), ntohs(inet->dport), - size, tp->snd_nxt, tp->snd_una, + skb->len, tp->snd_nxt, tp->snd_una, tp->snd_cwnd, tcp_current_ssthresh(sk), - tp->snd_wnd); + tp->snd_wnd, tp->srtt >> 3); + tcpw.lastcwnd = tp->snd_cwnd; } jprobe_return(); return 0; } -static struct jprobe tcp_send_probe = { +static struct jprobe tcp_probe = { .kp = { - .symbol_name = "tcp_sendmsg", + .symbol_name = "tcp_rcv_established", }, - .entry = JPROBE_ENTRY(jtcp_sendmsg), + .entry = JPROBE_ENTRY(jtcp_rcv_established), }; static int tcpprobe_open(struct inode * inode, struct file * file) { kfifo_reset(tcpw.fifo); - do_gettimeofday(&tcpw.tstart); + tcpw.start = ktime_get(); return 0; } @@ -162,7 +172,7 @@ static __init int tcpprobe_init(void) if (!proc_net_fops_create(procname, S_IRUSR, &tcpprobe_fops)) goto err0; - ret = register_jprobe(&tcp_send_probe); + ret = register_jprobe(&tcp_probe); if (ret) goto err1; @@ -180,7 +190,7 @@ static __exit void tcpprobe_exit(void) { kfifo_free(tcpw.fifo); proc_net_remove(procname); - unregister_jprobe(&tcp_send_probe); + unregister_jprobe(&tcp_probe); } module_exit(tcpprobe_exit); diff -puN net/ipv4/tcp_timer.c~git-net net/ipv4/tcp_timer.c --- a/net/ipv4/tcp_timer.c~git-net +++ a/net/ipv4/tcp_timer.c @@ -233,7 +233,7 @@ static void tcp_probe_timer(struct sock struct tcp_sock *tp = tcp_sk(sk); int max_probes; - if (tp->packets_out || !sk->sk_send_head) { + if (tp->packets_out || !tcp_send_head(sk)) { icsk->icsk_probes_out = 0; return; } @@ -284,7 +284,7 @@ static void tcp_retransmit_timer(struct if (!tp->packets_out) goto out; - BUG_TRAP(!skb_queue_empty(&sk->sk_write_queue)); + BUG_TRAP(!tcp_write_queue_empty(sk)); if (!tp->snd_wnd && !sock_flag(sk, SOCK_DEAD) && !((1 << sk->sk_state) & (TCPF_SYN_SENT | TCPF_SYN_RECV))) { @@ -306,7 +306,7 @@ static void tcp_retransmit_timer(struct goto out; } tcp_enter_loss(sk, 0); - tcp_retransmit_skb(sk, 
skb_peek(&sk->sk_write_queue)); + tcp_retransmit_skb(sk, tcp_write_queue_head(sk)); __sk_dst_reset(sk); goto out_reset_timer; } @@ -341,7 +341,7 @@ static void tcp_retransmit_timer(struct tcp_enter_loss(sk, 0); } - if (tcp_retransmit_skb(sk, skb_peek(&sk->sk_write_queue)) > 0) { + if (tcp_retransmit_skb(sk, tcp_write_queue_head(sk)) > 0) { /* Retransmission failed because of local congestion, * do not backoff. */ @@ -482,7 +482,7 @@ static void tcp_keepalive_timer (unsigne elapsed = keepalive_time_when(tp); /* It is alive without keepalive 8) */ - if (tp->packets_out || sk->sk_send_head) + if (tp->packets_out || tcp_send_head(sk)) goto resched; elapsed = tcp_time_stamp - tp->rcv_tstamp; diff -puN net/ipv4/tcp_vegas.c~git-net net/ipv4/tcp_vegas.c --- a/net/ipv4/tcp_vegas.c~git-net +++ a/net/ipv4/tcp_vegas.c @@ -341,16 +341,14 @@ static void tcp_vegas_get_info(struct so { const struct vegas *ca = inet_csk_ca(sk); if (ext & (1 << (INET_DIAG_VEGASINFO - 1))) { - struct tcpvegas_info *info; + struct tcpvegas_info info = { + .tcpv_enabled = ca->doing_vegas_now, + .tcpv_rttcnt = ca->cntRTT, + .tcpv_rtt = ca->baseRTT, + .tcpv_minrtt = ca->minRTT, + }; - info = RTA_DATA(__RTA_PUT(skb, INET_DIAG_VEGASINFO, - sizeof(*info))); - - info->tcpv_enabled = ca->doing_vegas_now; - info->tcpv_rttcnt = ca->cntRTT; - info->tcpv_rtt = ca->baseRTT; - info->tcpv_minrtt = ca->minRTT; - rtattr_failure: ; + nla_put(skb, INET_DIAG_VEGASINFO, sizeof(info), &info); } } diff -puN net/ipv4/tcp_westwood.c~git-net net/ipv4/tcp_westwood.c --- a/net/ipv4/tcp_westwood.c~git-net +++ a/net/ipv4/tcp_westwood.c @@ -226,7 +226,7 @@ static void tcp_westwood_event(struct so struct tcp_sock *tp = tcp_sk(sk); struct westwood *w = inet_csk_ca(sk); - switch(event) { + switch (event) { case CA_EVENT_FAST_ACK: westwood_fast_bw(sk); break; @@ -260,16 +260,13 @@ static void tcp_westwood_info(struct soc { const struct westwood *ca = inet_csk_ca(sk); if (ext & (1 << (INET_DIAG_VEGASINFO - 1))) { - struct rtattr *rta; - struct tcpvegas_info *info; + struct tcpvegas_info info = { + .tcpv_enabled = 1, + .tcpv_rtt = jiffies_to_usecs(ca->rtt), + .tcpv_minrtt = jiffies_to_usecs(ca->rtt_min), + }; - rta = __RTA_PUT(skb, INET_DIAG_VEGASINFO, sizeof(*info)); - info = RTA_DATA(rta); - info->tcpv_enabled = 1; - info->tcpv_rttcnt = 0; - info->tcpv_rtt = jiffies_to_usecs(ca->rtt); - info->tcpv_minrtt = jiffies_to_usecs(ca->rtt_min); - rtattr_failure: ; + nla_put(skb, INET_DIAG_VEGASINFO, sizeof(info), &info); } } diff -puN /dev/null net/ipv4/tcp_yeah.c --- /dev/null +++ a/net/ipv4/tcp_yeah.c @@ -0,0 +1,271 @@ +/* + * + * YeAH TCP + * + * For further details look at: + * http://wil.cs.caltech.edu/pfldnet2007/paper/YeAH_TCP.pdf + * + */ + +#include "tcp_yeah.h" + +/* Default values of the Vegas variables, in fixed-point representation + * with V_PARAM_SHIFT bits to the right of the binary point. 
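
The tcp_vegas_get_info() and tcp_westwood_info() hunks above trade the RTA_PUT()/RTA_DATA()/rtattr_failure pattern for a stack-local struct handed to nla_put(). Reduced to a sketch (emit_vegas_info is a made-up wrapper; struct vegas is the algorithm's private state as declared in tcp_yeah.h below, and the real callers ignore nla_put()'s return value just as shown above):

#include <linux/inet_diag.h>
#include <net/netlink.h>

/* Sketch: publish a fixed-size congestion-control info attribute. */
static void emit_vegas_info(struct sk_buff *skb, const struct vegas *ca)
{
	struct tcpvegas_info info = {
		.tcpv_enabled = ca->doing_vegas_now,
		.tcpv_rttcnt  = ca->cntRTT,
		.tcpv_rtt     = ca->baseRTT,
		.tcpv_minrtt  = ca->minRTT,
	};

	/* nla_put() copies the struct into the netlink skb in one call. */
	nla_put(skb, INET_DIAG_VEGASINFO, sizeof(info), &info);
}
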
+ */ +#define V_PARAM_SHIFT 1 + +#define TCP_YEAH_ALPHA 80 //lin number of packets queued at the bottleneck +#define TCP_YEAH_GAMMA 1 //lin fraction of queue to be removed per rtt +#define TCP_YEAH_DELTA 3 //log minimum fraction of cwnd to be removed on loss +#define TCP_YEAH_EPSILON 1 //log maximum fraction to be removed on early decongestion +#define TCP_YEAH_PHY 8 //lin maximum delta from base +#define TCP_YEAH_RHO 16 //lin minumum number of consecutive rtt to consider competition on loss +#define TCP_YEAH_ZETA 50 //lin minimum number of state switchs to reset reno_count + +#define TCP_SCALABLE_AI_CNT 100U + +/* YeAH variables */ +struct yeah { + /* Vegas */ + u32 beg_snd_nxt; /* right edge during last RTT */ + u32 beg_snd_una; /* left edge during last RTT */ + u32 beg_snd_cwnd; /* saves the size of the cwnd */ + u8 doing_vegas_now;/* if true, do vegas for this RTT */ + u16 cntRTT; /* # of RTTs measured within last RTT */ + u32 minRTT; /* min of RTTs measured within last RTT (in usec) */ + u32 baseRTT; /* the min of all Vegas RTT measurements seen (in usec) */ + + /* YeAH */ + u32 lastQ; + u32 doing_reno_now; + + u32 reno_count; + u32 fast_count; + + u32 pkts_acked; +}; + +static void tcp_yeah_init(struct sock *sk) +{ + struct tcp_sock *tp = tcp_sk(sk); + struct yeah *yeah = inet_csk_ca(sk); + + tcp_vegas_init(sk); + + yeah->doing_reno_now = 0; + yeah->lastQ = 0; + + yeah->reno_count = 2; + + /* Ensure the MD arithmetic works. This is somewhat pedantic, + * since I don't think we will see a cwnd this large. :) */ + tp->snd_cwnd_clamp = min_t(u32, tp->snd_cwnd_clamp, 0xffffffff/128); + +} + + +static void tcp_yeah_pkts_acked(struct sock *sk, u32 pkts_acked) +{ + const struct inet_connection_sock *icsk = inet_csk(sk); + struct yeah *yeah = inet_csk_ca(sk); + + if (icsk->icsk_ca_state == TCP_CA_Open) + yeah->pkts_acked = pkts_acked; +} + +static void tcp_yeah_cong_avoid(struct sock *sk, u32 ack, + u32 seq_rtt, u32 in_flight, int flag) +{ + struct tcp_sock *tp = tcp_sk(sk); + struct yeah *yeah = inet_csk_ca(sk); + + if (!tcp_is_cwnd_limited(sk, in_flight)) + return; + + if (tp->snd_cwnd <= tp->snd_ssthresh) { + tcp_slow_start(tp); + } else if (!yeah->doing_reno_now) { + /* Scalable */ + + tp->snd_cwnd_cnt+=yeah->pkts_acked; + if (tp->snd_cwnd_cnt > min(tp->snd_cwnd, TCP_SCALABLE_AI_CNT)){ + if (tp->snd_cwnd < tp->snd_cwnd_clamp) + tp->snd_cwnd++; + tp->snd_cwnd_cnt = 0; + } + + yeah->pkts_acked = 1; + + } else { + /* Reno */ + + if (tp->snd_cwnd_cnt < tp->snd_cwnd) + tp->snd_cwnd_cnt++; + + if (tp->snd_cwnd_cnt >= tp->snd_cwnd) { + tp->snd_cwnd++; + tp->snd_cwnd_cnt = 0; + } + } + + /* The key players are v_beg_snd_una and v_beg_snd_nxt. + * + * These are so named because they represent the approximate values + * of snd_una and snd_nxt at the beginning of the current RTT. More + * precisely, they represent the amount of data sent during the RTT. + * At the end of the RTT, when we receive an ACK for v_beg_snd_nxt, + * we will calculate that (v_beg_snd_nxt - v_beg_snd_una) outstanding + * bytes of data have been ACKed during the course of the RTT, giving + * an "actual" rate of: + * + * (v_beg_snd_nxt - v_beg_snd_una) / (rtt duration) + * + * Unfortunately, v_beg_snd_una is not exactly equal to snd_una, + * because delayed ACKs can cover more than one segment, so they + * don't line up yeahly with the boundaries of RTTs. 
+ * + * Another unfortunate fact of life is that delayed ACKs delay the + * advance of the left edge of our send window, so that the number + * of bytes we send in an RTT is often less than our cwnd will allow. + * So we keep track of our cwnd separately, in v_beg_snd_cwnd. + */ + + if (after(ack, yeah->beg_snd_nxt)) { + + /* We do the Vegas calculations only if we got enough RTT + * samples that we can be reasonably sure that we got + * at least one RTT sample that wasn't from a delayed ACK. + * If we only had 2 samples total, + * then that means we're getting only 1 ACK per RTT, which + * means they're almost certainly delayed ACKs. + * If we have 3 samples, we should be OK. + */ + + if (yeah->cntRTT > 2) { + u32 rtt, queue; + u64 bw; + + /* We have enough RTT samples, so, using the Vegas + * algorithm, we determine if we should increase or + * decrease cwnd, and by how much. + */ + + /* Pluck out the RTT we are using for the Vegas + * calculations. This is the min RTT seen during the + * last RTT. Taking the min filters out the effects + * of delayed ACKs, at the cost of noticing congestion + * a bit later. + */ + rtt = yeah->minRTT; + + /* Compute excess number of packets above bandwidth + * Avoid doing full 64 bit divide. + */ + bw = tp->snd_cwnd; + bw *= rtt - yeah->baseRTT; + do_div(bw, rtt); + queue = bw; + + if (queue > TCP_YEAH_ALPHA || + rtt - yeah->baseRTT > (yeah->baseRTT / TCP_YEAH_PHY)) { + if (queue > TCP_YEAH_ALPHA + && tp->snd_cwnd > yeah->reno_count) { + u32 reduction = min(queue / TCP_YEAH_GAMMA , + tp->snd_cwnd >> TCP_YEAH_EPSILON); + + tp->snd_cwnd -= reduction; + + tp->snd_cwnd = max(tp->snd_cwnd, + yeah->reno_count); + + tp->snd_ssthresh = tp->snd_cwnd; + } + + if (yeah->reno_count <= 2) + yeah->reno_count = max(tp->snd_cwnd>>1, 2U); + else + yeah->reno_count++; + + yeah->doing_reno_now = min(yeah->doing_reno_now + 1, + 0xffffffU); + } else { + yeah->fast_count++; + + if (yeah->fast_count > TCP_YEAH_ZETA) { + yeah->reno_count = 2; + yeah->fast_count = 0; + } + + yeah->doing_reno_now = 0; + } + + yeah->lastQ = queue; + + } + + /* Save the extent of the current window so we can use this + * at the end of the next RTT. + */ + yeah->beg_snd_una = yeah->beg_snd_nxt; + yeah->beg_snd_nxt = tp->snd_nxt; + yeah->beg_snd_cwnd = tp->snd_cwnd; + + /* Wipe the slate clean for the next RTT. 
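
The backlog estimate above is the Vegas one: queue = snd_cwnd * (rtt - baseRTT) / rtt, i.e. the share of the window believed to be parked in router buffers. Worked with illustrative numbers (not from any trace): cwnd = 120 packets, baseRTT = 100 ms and a current minimum rtt of 150 ms give queue = 120 * 50 / 150 = 40 packets; that is under TCP_YEAH_ALPHA (80), so snd_cwnd is left alone, but the 50 ms delay excess is above baseRTT / TCP_YEAH_PHY = 12.5 ms, so the branch above still counts this RTT toward doing_reno_now. A sketch of the estimate itself, with the same 64-bit guard as the patch:

#include <linux/types.h>
#include <asm/div64.h>

/* Sketch: Vegas-style estimate of packets queued along the path.
 * rtt and base_rtt share a unit (microseconds here); cwnd is in packets.
 */
static u32 yeah_queue_estimate(u32 cwnd, u32 rtt, u32 base_rtt)
{
	u64 bw = (u64)cwnd * (rtt - base_rtt);

	do_div(bw, rtt);	/* avoids a full 64-bit divide, as above */
	return (u32)bw;
}
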
*/ + yeah->cntRTT = 0; + yeah->minRTT = 0x7fffffff; + } +} + +static u32 tcp_yeah_ssthresh(struct sock *sk) { + const struct tcp_sock *tp = tcp_sk(sk); + struct yeah *yeah = inet_csk_ca(sk); + u32 reduction; + + if (yeah->doing_reno_now < TCP_YEAH_RHO) { + reduction = yeah->lastQ; + + reduction = min( reduction, max(tp->snd_cwnd>>1, 2U) ); + + reduction = max( reduction, tp->snd_cwnd >> TCP_YEAH_DELTA); + } else + reduction = max(tp->snd_cwnd>>1,2U); + + yeah->fast_count = 0; + yeah->reno_count = max(yeah->reno_count>>1, 2U); + + return tp->snd_cwnd - reduction; +} + +static struct tcp_congestion_ops tcp_yeah = { + .init = tcp_yeah_init, + .ssthresh = tcp_yeah_ssthresh, + .cong_avoid = tcp_yeah_cong_avoid, + .min_cwnd = tcp_reno_min_cwnd, + .rtt_sample = tcp_vegas_rtt_calc, + .set_state = tcp_vegas_state, + .cwnd_event = tcp_vegas_cwnd_event, + .get_info = tcp_vegas_get_info, + .pkts_acked = tcp_yeah_pkts_acked, + + .owner = THIS_MODULE, + .name = "yeah", +}; + +static int __init tcp_yeah_register(void) +{ + BUG_ON(sizeof(struct yeah) > ICSK_CA_PRIV_SIZE); + tcp_register_congestion_control(&tcp_yeah); + return 0; +} + +static void __exit tcp_yeah_unregister(void) +{ + tcp_unregister_congestion_control(&tcp_yeah); +} + +module_init(tcp_yeah_register); +module_exit(tcp_yeah_unregister); + +MODULE_AUTHOR("Angelo P. Castellani"); +MODULE_LICENSE("GPL"); +MODULE_DESCRIPTION("YeAH TCP"); diff -puN /dev/null net/ipv4/tcp_yeah.h --- /dev/null +++ a/net/ipv4/tcp_yeah.h @@ -0,0 +1,135 @@ +#include +#include +#include +#include +#include + +#include + +/* Vegas variables */ +struct vegas { + u32 beg_snd_nxt; /* right edge during last RTT */ + u32 beg_snd_una; /* left edge during last RTT */ + u32 beg_snd_cwnd; /* saves the size of the cwnd */ + u8 doing_vegas_now;/* if true, do vegas for this RTT */ + u16 cntRTT; /* # of RTTs measured within last RTT */ + u32 minRTT; /* min of RTTs measured within last RTT (in usec) */ + u32 baseRTT; /* the min of all Vegas RTT measurements seen (in usec) */ +}; + +/* There are several situations when we must "re-start" Vegas: + * + * o when a connection is established + * o after an RTO + * o after fast recovery + * o when we send a packet and there is no outstanding + * unacknowledged data (restarting an idle connection) + * + * In these circumstances we cannot do a Vegas calculation at the + * end of the first RTT, because any calculation we do is using + * stale info -- both the saved cwnd and congestion feedback are + * stale. + * + * Instead we must wait until the completion of an RTT during + * which we actually receive ACKs. + */ +static inline void vegas_enable(struct sock *sk) +{ + const struct tcp_sock *tp = tcp_sk(sk); + struct vegas *vegas = inet_csk_ca(sk); + + /* Begin taking Vegas samples next time we send something. */ + vegas->doing_vegas_now = 1; + + /* Set the beginning of the next send window. */ + vegas->beg_snd_nxt = tp->snd_nxt; + + vegas->cntRTT = 0; + vegas->minRTT = 0x7fffffff; +} + +/* Stop taking Vegas samples for now. */ +static inline void vegas_disable(struct sock *sk) +{ + struct vegas *vegas = inet_csk_ca(sk); + + vegas->doing_vegas_now = 0; +} + +static void tcp_vegas_init(struct sock *sk) +{ + struct vegas *vegas = inet_csk_ca(sk); + + vegas->baseRTT = 0x7fffffff; + vegas_enable(sk); +} + +static void tcp_vegas_state(struct sock *sk, u8 ca_state) +{ + + if (ca_state == TCP_CA_Open) + vegas_enable(sk); + else + vegas_disable(sk); +} + +/* Do RTT sampling needed for Vegas. 
+ * Basically we: + * o min-filter RTT samples from within an RTT to get the current + * propagation delay + queuing delay (we are min-filtering to try to + * avoid the effects of delayed ACKs) + * o min-filter RTT samples from a much longer window (forever for now) + * to find the propagation delay (baseRTT) + */ +static void tcp_vegas_rtt_calc(struct sock *sk, u32 usrtt) +{ + struct vegas *vegas = inet_csk_ca(sk); + u32 vrtt = usrtt + 1; /* Never allow zero rtt or baseRTT */ + + /* Filter to find propagation delay: */ + if (vrtt < vegas->baseRTT) + vegas->baseRTT = vrtt; + + /* Find the min RTT during the last RTT to find + * the current prop. delay + queuing delay: + */ + vegas->minRTT = min(vegas->minRTT, vrtt); + vegas->cntRTT++; +} + +/* + * If the connection is idle and we are restarting, + * then we don't want to do any Vegas calculations + * until we get fresh RTT samples. So when we + * restart, we reset our Vegas state to a clean + * slate. After we get acks for this flight of + * packets, _then_ we can make Vegas calculations + * again. + */ +static void tcp_vegas_cwnd_event(struct sock *sk, enum tcp_ca_event event) +{ + if (event == CA_EVENT_CWND_RESTART || + event == CA_EVENT_TX_START) + tcp_vegas_init(sk); +} + +/* Extract info for Tcp socket info provided via netlink. */ +static void tcp_vegas_get_info(struct sock *sk, u32 ext, + struct sk_buff *skb) +{ + const struct vegas *ca = inet_csk_ca(sk); + if (ext & (1 << (INET_DIAG_VEGASINFO - 1))) { + struct tcpvegas_info *info; + + info = RTA_DATA(__RTA_PUT(skb, INET_DIAG_VEGASINFO, + sizeof(*info))); + + info->tcpv_enabled = ca->doing_vegas_now; + info->tcpv_rttcnt = ca->cntRTT; + info->tcpv_rtt = ca->baseRTT; + info->tcpv_minrtt = ca->minRTT; + rtattr_failure: ; + } +} + + diff -puN net/ipv4/udp.c~git-net net/ipv4/udp.c --- a/net/ipv4/udp.c~git-net +++ a/net/ipv4/udp.c @@ -175,7 +175,8 @@ int __udp_lib_get_port(struct sock *sk, ; } result = best; - for(i = 0; i < (1 << 16) / UDP_HTABLE_SIZE; i++, result += UDP_HTABLE_SIZE) { + for (i = 0; i < (1 << 16) / UDP_HTABLE_SIZE; + i++, result += UDP_HTABLE_SIZE) { if (result > sysctl_local_port_range[1]) result = sysctl_local_port_range[0] + ((result - sysctl_local_port_range[0]) & @@ -212,13 +213,13 @@ fail: return error; } -__inline__ int udp_get_port(struct sock *sk, unsigned short snum, +int udp_get_port(struct sock *sk, unsigned short snum, int (*scmp)(const struct sock *, const struct sock *)) { return __udp_lib_get_port(sk, snum, udp_hash, &udp_port_rover, scmp); } -inline int ipv4_rcv_saddr_equal(const struct sock *sk1, const struct sock *sk2) +int ipv4_rcv_saddr_equal(const struct sock *sk1, const struct sock *sk2) { struct inet_sock *inet1 = inet_sk(sk1), *inet2 = inet_sk(sk2); @@ -270,10 +271,10 @@ static struct sock *__udp4_lib_lookup(__ continue; score+=2; } - if(score == 9) { + if (score == 9) { result = sk; break; - } else if(score > badness) { + } else if (score > badness) { result = sk; badness = score; } @@ -329,8 +330,8 @@ void __udp4_lib_err(struct sk_buff *skb, struct inet_sock *inet; struct iphdr *iph = (struct iphdr*)skb->data; struct udphdr *uh = (struct udphdr*)(skb->data+(iph->ihl<<2)); - int type = skb->h.icmph->type; - int code = skb->h.icmph->code; + const int type = icmp_hdr(skb)->type; + const int code = icmp_hdr(skb)->code; struct sock *sk; int harderr; int err; @@ -390,7 +391,7 @@ out: sock_put(sk); } -__inline__ void udp_err(struct sk_buff *skb, u32 info) +void udp_err(struct sk_buff *skb, u32 info) { return __udp4_lib_err(skb, info, udp_hash); } @@ 
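The tcp_yeah_cong_avoid() logic above estimates how many packets the flow has sitting in router queues from the spread between the minimum RTT of the last window and the smallest RTT ever seen (baseRTT), and drops out of "fast" mode when that backlog or the extra delay grows too large. The standalone sketch below reproduces the same integer arithmetic with made-up sample values; the two constants are illustrative stand-ins for the TCP_YEAH_ALPHA and TCP_YEAH_PHY defaults #defined elsewhere in the YeAH patch, and none of the numbers come from a real trace.

#include <stdio.h>
#include <stdint.h>

#define TCP_YEAH_ALPHA	80	/* assumed default: max packets tolerated in the queue */
#define TCP_YEAH_PHY	8	/* assumed default: fraction of baseRTT treated as congestion */

int main(void)
{
	uint32_t cwnd     = 100;	/* packets in flight (sample value)      */
	uint32_t rtt      = 120000;	/* min RTT of the last window, usec      */
	uint32_t base_rtt = 100000;	/* smallest RTT ever observed, usec      */

	/* Same computation as the hunk above: 64-bit product, then divide. */
	uint64_t bw    = (uint64_t)cwnd * (rtt - base_rtt);
	uint32_t queue = (uint32_t)(bw / rtt);

	int congested = queue > TCP_YEAH_ALPHA ||
			rtt - base_rtt > base_rtt / TCP_YEAH_PHY;

	printf("estimated backlog: %u packets -> %s\n", (unsigned)queue,
	       congested ? "leave fast mode (Reno-like behaviour)"
			 : "stay in fast mode");
	return 0;
}

With these sample numbers the backlog itself (16 packets) stays under ALPHA, but the 20 ms of queueing delay on a 100 ms path exceeds baseRTT/PHY, so the slow path is taken; that is the second half of the || test in the hunk.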
-419,13 +420,14 @@ static void udp4_hwcsum_outgoing(struct __be32 src, __be32 dst, int len ) { unsigned int offset; - struct udphdr *uh = skb->h.uh; + struct udphdr *uh = udp_hdr(skb); __wsum csum = 0; if (skb_queue_len(&sk->sk_write_queue) == 1) { /* * Only one fragment on the socket. */ + skb->csum_start = skb_transport_header(skb) - skb->head; skb->csum_offset = offsetof(struct udphdr, check); uh->check = ~csum_tcpudp_magic(src, dst, len, IPPROTO_UDP, 0); } else { @@ -434,7 +436,7 @@ static void udp4_hwcsum_outgoing(struct * fragments on the socket so that all csums of sk_buffs * should be together */ - offset = skb->h.raw - skb->data; + offset = skb_transport_offset(skb); skb->csum = skb_checksum(skb, offset, skb->len - offset, 0); skb->ip_summed = CHECKSUM_NONE; @@ -469,7 +471,7 @@ static int udp_push_pending_frames(struc /* * Create a UDP header */ - uh = skb->h.uh; + uh = udp_hdr(skb); uh->source = fl->fl_ip_sport; uh->dest = fl->fl_ip_dport; uh->len = htons(up->len); @@ -765,38 +767,38 @@ out: int udp_ioctl(struct sock *sk, int cmd, unsigned long arg) { - switch(cmd) + switch (cmd) { + case SIOCOUTQ: { - case SIOCOUTQ: - { - int amount = atomic_read(&sk->sk_wmem_alloc); - return put_user(amount, (int __user *)arg); - } + int amount = atomic_read(&sk->sk_wmem_alloc); + return put_user(amount, (int __user *)arg); + } - case SIOCINQ: - { - struct sk_buff *skb; - unsigned long amount; - - amount = 0; - spin_lock_bh(&sk->sk_receive_queue.lock); - skb = skb_peek(&sk->sk_receive_queue); - if (skb != NULL) { - /* - * We will only return the amount - * of this packet since that is all - * that will be read. - */ - amount = skb->len - sizeof(struct udphdr); - } - spin_unlock_bh(&sk->sk_receive_queue.lock); - return put_user(amount, (int __user *)arg); + case SIOCINQ: + { + struct sk_buff *skb; + unsigned long amount; + + amount = 0; + spin_lock_bh(&sk->sk_receive_queue.lock); + skb = skb_peek(&sk->sk_receive_queue); + if (skb != NULL) { + /* + * We will only return the amount + * of this packet since that is all + * that will be read. + */ + amount = skb->len - sizeof(struct udphdr); } + spin_unlock_bh(&sk->sk_receive_queue.lock); + return put_user(amount, (int __user *)arg); + } - default: - return -ENOIOCTLCMD; + default: + return -ENOIOCTLCMD; } - return(0); + + return 0; } /* @@ -810,7 +812,9 @@ int udp_recvmsg(struct kiocb *iocb, stru struct inet_sock *inet = inet_sk(sk); struct sockaddr_in *sin = (struct sockaddr_in *)msg->msg_name; struct sk_buff *skb; - int copied, err, copy_only, is_udplite = IS_UDPLITE(sk); + unsigned int ulen, copied; + int err; + int is_udplite = IS_UDPLITE(sk); /* * Check any passed addresses @@ -826,28 +830,25 @@ try_again: if (!skb) goto out; - copied = skb->len - sizeof(struct udphdr); - if (copied > len) { - copied = len; + ulen = skb->len - sizeof(struct udphdr); + copied = len; + if (copied > ulen) + copied = ulen; + else if (copied < ulen) msg->msg_flags |= MSG_TRUNC; - } /* - * Decide whether to checksum and/or copy data. - * - * UDP: checksum may have been computed in HW, - * (re-)compute it if message is truncated. - * UDP-Lite: always needs to checksum, no HW support. + * If checksum is needed at all, try to do it while copying the + * data. If the data is truncated, or if we only want a partial + * coverage checksum (UDP-Lite), do it before the copy. 
*/ - copy_only = (skb->ip_summed==CHECKSUM_UNNECESSARY); - if (is_udplite || (!copy_only && msg->msg_flags&MSG_TRUNC)) { - if (__udp_lib_checksum_complete(skb)) + if (copied < ulen || UDP_SKB_CB(skb)->partial_cov) { + if (udp_lib_checksum_complete(skb)) goto csum_copy_err; - copy_only = 1; } - if (copy_only) + if (skb_csum_unnecessary(skb)) err = skb_copy_datagram_iovec(skb, sizeof(struct udphdr), msg->msg_iov, copied ); else { @@ -866,8 +867,8 @@ try_again: if (sin) { sin->sin_family = AF_INET; - sin->sin_port = skb->h.uh->source; - sin->sin_addr.s_addr = skb->nh.iph->saddr; + sin->sin_port = udp_hdr(skb)->source; + sin->sin_addr.s_addr = ip_hdr(skb)->saddr; memset(sin->sin_zero, 0, sizeof(sin->sin_zero)); } if (inet->cmsg_flags) @@ -875,7 +876,7 @@ try_again: err = copied; if (flags & MSG_TRUNC) - err = skb->len - sizeof(struct udphdr); + err = ulen; out_free: skb_free_datagram(sk, skb); @@ -949,7 +950,7 @@ static int udp_encap_rcv(struct sock * s return 1; /* Now we can get the pointers */ - uh = skb->h.uh; + uh = udp_hdr(skb); udpdata = (__u8 *)uh + sizeof(struct udphdr); udpdata32 = (__be32 *)udpdata; @@ -959,7 +960,7 @@ static int udp_encap_rcv(struct sock * s /* Check if this is a keepalive packet. If so, eat it. */ if (len == 1 && udpdata[0] == 0xff) { return 0; - } else if (len > sizeof(struct ip_esp_hdr) && udpdata32[0] != 0 ) { + } else if (len > sizeof(struct ip_esp_hdr) && udpdata32[0] != 0) { /* ESP Packet without Non-ESP header */ len = sizeof(struct udphdr); } else @@ -990,7 +991,7 @@ static int udp_encap_rcv(struct sock * s return 0; /* Now we can update and verify the packet length... */ - iph = skb->nh.iph; + iph = ip_hdr(skb); iphlen = iph->ihl << 2; iph->tot_len = htons(ntohs(iph->tot_len) - len); if (skb->len < iphlen + len) { @@ -1002,7 +1003,8 @@ static int udp_encap_rcv(struct sock * s * transport header to point to ESP. Keep UDP on the stack * for later. */ - skb->h.raw = skb_pull(skb, len); + __skb_pull(skb, len); + skb_reset_transport_header(skb); /* modify the protocol (it's ESP!) */ iph->protocol = IPPROTO_ESP; @@ -1095,10 +1097,9 @@ int udp_queue_rcv_skb(struct sock * sk, } } - if (sk->sk_filter && skb->ip_summed != CHECKSUM_UNNECESSARY) { - if (__udp_lib_checksum_complete(skb)) + if (sk->sk_filter) { + if (udp_lib_checksum_complete(skb)) goto drop; - skb->ip_summed = CHECKSUM_UNNECESSARY; } if ((rc = sock_queue_rcv_skb(sk,skb)) < 0) { @@ -1143,10 +1144,10 @@ static int __udp4_lib_mcast_deliver(stru sknext = udp_v4_mcast_next(sk_next(sk), uh->dest, daddr, uh->source, saddr, dif); - if(sknext) + if (sknext) skb1 = skb_clone(skb, GFP_ATOMIC); - if(skb1) { + if (skb1) { int ret = udp_queue_rcv_skb(sk, skb1); if (ret > 0) /* we should probably re-process instead @@ -1154,7 +1155,7 @@ static int __udp4_lib_mcast_deliver(stru kfree_skb(skb1); } sk = sknext; - } while(sknext); + } while (sknext); } else kfree_skb(skb); read_unlock(&udp_hash_lock); @@ -1166,25 +1167,37 @@ static int __udp4_lib_mcast_deliver(stru * Otherwise, csum completion requires chacksumming packet body, * including udp header and folding it to skb->csum. 
*/ -static inline void udp4_csum_init(struct sk_buff *skb, struct udphdr *uh) +static inline int udp4_csum_init(struct sk_buff *skb, struct udphdr *uh, + int proto) { + const struct iphdr *iph; + int err; + + UDP_SKB_CB(skb)->partial_cov = 0; + UDP_SKB_CB(skb)->cscov = skb->len; + + if (proto == IPPROTO_UDPLITE) { + err = udplite_checksum_init(skb, uh); + if (err) + return err; + } + + iph = ip_hdr(skb); if (uh->check == 0) { skb->ip_summed = CHECKSUM_UNNECESSARY; } else if (skb->ip_summed == CHECKSUM_COMPLETE) { - if (!csum_tcpudp_magic(skb->nh.iph->saddr, skb->nh.iph->daddr, - skb->len, IPPROTO_UDP, skb->csum )) + if (!csum_tcpudp_magic(iph->saddr, iph->daddr, skb->len, + proto, skb->csum)) skb->ip_summed = CHECKSUM_UNNECESSARY; } - if (skb->ip_summed != CHECKSUM_UNNECESSARY) - skb->csum = csum_tcpudp_nofold(skb->nh.iph->saddr, - skb->nh.iph->daddr, - skb->len, IPPROTO_UDP, 0); + if (!skb_csum_unnecessary(skb)) + skb->csum = csum_tcpudp_nofold(iph->saddr, iph->daddr, + skb->len, proto, 0); /* Probably, we should checksum udp header (it should be in cache * in any case) and data in tiny packets (< rx copybreak). */ - /* UDP = UDP-Lite with a non-partial checksum coverage */ - UDP_SKB_CB(skb)->partial_cov = 0; + return 0; } /* @@ -1192,14 +1205,14 @@ static inline void udp4_csum_init(struct */ int __udp4_lib_rcv(struct sk_buff *skb, struct hlist_head udptable[], - int is_udplite) + int proto) { struct sock *sk; - struct udphdr *uh = skb->h.uh; + struct udphdr *uh = udp_hdr(skb); unsigned short ulen; struct rtable *rt = (struct rtable*)skb->dst; - __be32 saddr = skb->nh.iph->saddr; - __be32 daddr = skb->nh.iph->daddr; + __be32 saddr = ip_hdr(skb)->saddr; + __be32 daddr = ip_hdr(skb)->daddr; /* * Validate the packet. @@ -1211,20 +1224,17 @@ int __udp4_lib_rcv(struct sk_buff *skb, if (ulen > skb->len) goto short_packet; - if(! is_udplite ) { /* UDP validates ulen. */ - + if (proto == IPPROTO_UDP) { + /* UDP validates ulen. */ if (ulen < sizeof(*uh) || pskb_trim_rcsum(skb, ulen)) goto short_packet; - uh = skb->h.uh; - - udp4_csum_init(skb, uh); - - } else { /* UDP-Lite validates cscov. */ - if (udplite4_csum_init(skb, uh)) - goto csum_error; + uh = udp_hdr(skb); } - if(rt->rt_flags & (RTCF_BROADCAST|RTCF_MULTICAST)) + if (udp4_csum_init(skb, uh, proto)) + goto csum_error; + + if (rt->rt_flags & (RTCF_BROADCAST|RTCF_MULTICAST)) return __udp4_lib_mcast_deliver(skb, uh, saddr, daddr, udptable); sk = __udp4_lib_lookup(saddr, uh->source, daddr, uh->dest, @@ -1250,7 +1260,7 @@ int __udp4_lib_rcv(struct sk_buff *skb, if (udp_lib_checksum_complete(skb)) goto csum_error; - UDP_INC_STATS_BH(UDP_MIB_NOPORTS, is_udplite); + UDP_INC_STATS_BH(UDP_MIB_NOPORTS, proto == IPPROTO_UDPLITE); icmp_send(skb, ICMP_DEST_UNREACH, ICMP_PORT_UNREACH, 0); /* @@ -1258,11 +1268,11 @@ int __udp4_lib_rcv(struct sk_buff *skb, * don't wanna listen. Ignore it. */ kfree_skb(skb); - return(0); + return 0; short_packet: LIMIT_NETDEBUG(KERN_DEBUG "UDP%s: short packet: From %u.%u.%u.%u:%u %d/%d to %u.%u.%u.%u:%u\n", - is_udplite? "-Lite" : "", + proto == IPPROTO_UDPLITE ? "-Lite" : "", NIPQUAD(saddr), ntohs(uh->source), ulen, @@ -1277,21 +1287,21 @@ csum_error: * the network is concerned, anyway) as per 4.1.3.4 (MUST). */ LIMIT_NETDEBUG(KERN_DEBUG "UDP%s: bad checksum. From %d.%d.%d.%d:%d to %d.%d.%d.%d:%d ulen %d\n", - is_udplite? "-Lite" : "", + proto == IPPROTO_UDPLITE ? 
"-Lite" : "", NIPQUAD(saddr), ntohs(uh->source), NIPQUAD(daddr), ntohs(uh->dest), ulen); drop: - UDP_INC_STATS_BH(UDP_MIB_INERRORS, is_udplite); + UDP_INC_STATS_BH(UDP_MIB_INERRORS, proto == IPPROTO_UDPLITE); kfree_skb(skb); - return(0); + return 0; } -__inline__ int udp_rcv(struct sk_buff *skb) +int udp_rcv(struct sk_buff *skb) { - return __udp4_lib_rcv(skb, udp_hash, 0); + return __udp4_lib_rcv(skb, udp_hash, IPPROTO_UDP); } int udp_destroy_sock(struct sock *sk) @@ -1313,13 +1323,13 @@ int udp_lib_setsockopt(struct sock *sk, int val; int err = 0; - if(optlencorkflag = 1; @@ -1373,7 +1383,7 @@ int udp_lib_setsockopt(struct sock *sk, default: err = -ENOPROTOOPT; break; - }; + } return err; } @@ -1404,15 +1414,15 @@ int udp_lib_getsockopt(struct sock *sk, struct udp_sock *up = udp_sk(sk); int val, len; - if(get_user(len,optlen)) + if (get_user(len,optlen)) return -EFAULT; len = min_t(unsigned int, len, sizeof(int)); - if(len < 0) + if (len < 0) return -EINVAL; - switch(optname) { + switch (optname) { case UDP_CORK: val = up->corkflag; break; @@ -1433,11 +1443,11 @@ int udp_lib_getsockopt(struct sock *sk, default: return -ENOPROTOOPT; - }; + } - if(put_user(len, optlen)) + if (put_user(len, optlen)) return -EFAULT; - if(copy_to_user(optval, &val,len)) + if (copy_to_user(optval, &val,len)) return -EFAULT; return 0; } @@ -1486,15 +1496,11 @@ unsigned int udp_poll(struct file *file, struct sk_buff *skb; spin_lock_bh(&rcvq->lock); - while ((skb = skb_peek(rcvq)) != NULL) { - if (udp_lib_checksum_complete(skb)) { - UDP_INC_STATS_BH(UDP_MIB_INERRORS, is_lite); - __skb_unlink(skb, rcvq); - kfree_skb(skb); - } else { - skb->ip_summed = CHECKSUM_UNNECESSARY; - break; - } + while ((skb = skb_peek(rcvq)) != NULL && + udp_lib_checksum_complete(skb)) { + UDP_INC_STATS_BH(UDP_MIB_INERRORS, is_lite); + __skb_unlink(skb, rcvq); + kfree_skb(skb); } spin_unlock_bh(&rcvq->lock); @@ -1573,7 +1579,7 @@ static struct sock *udp_get_idx(struct s struct sock *sk = udp_get_first(seq); if (sk) - while(pos && (sk = udp_get_next(seq, sk)) != NULL) + while (pos && (sk = udp_get_next(seq, sk)) != NULL) --pos; return pos ? 
NULL : sk; } diff -puN net/ipv4/udplite.c~git-net net/ipv4/udplite.c --- a/net/ipv4/udplite.c~git-net +++ a/net/ipv4/udplite.c @@ -31,7 +31,7 @@ static int udplite_v4_get_port(struct so static int udplite_rcv(struct sk_buff *skb) { - return __udp4_lib_rcv(skb, udplite_hash, 1); + return __udp4_lib_rcv(skb, udplite_hash, IPPROTO_UDPLITE); } static void udplite_err(struct sk_buff *skb, u32 info) diff -puN net/ipv4/xfrm4_input.c~git-net net/ipv4/xfrm4_input.c --- a/net/ipv4/xfrm4_input.c~git-net +++ a/net/ipv4/xfrm4_input.c @@ -28,7 +28,7 @@ static int xfrm4_parse_spi(struct sk_buf switch (nexthdr) { case IPPROTO_IPIP: case IPPROTO_IPV6: - *spi = skb->nh.iph->saddr; + *spi = ip_hdr(skb)->saddr; *seq = 0; return 0; } @@ -39,9 +39,9 @@ static int xfrm4_parse_spi(struct sk_buf #ifdef CONFIG_NETFILTER static inline int xfrm4_rcv_encap_finish(struct sk_buff *skb) { - struct iphdr *iph = skb->nh.iph; - if (skb->dst == NULL) { + const struct iphdr *iph = ip_hdr(skb); + if (ip_route_input(skb, iph->daddr, iph->saddr, iph->tos, skb->dev)) goto drop; @@ -55,18 +55,18 @@ drop: int xfrm4_rcv_encap(struct sk_buff *skb, __u16 encap_type) { - int err; __be32 spi, seq; struct xfrm_state *xfrm_vec[XFRM_MAX_DEPTH]; struct xfrm_state *x; int xfrm_nr = 0; int decaps = 0; + int err = xfrm4_parse_spi(skb, ip_hdr(skb)->protocol, &spi, &seq); - if ((err = xfrm4_parse_spi(skb, skb->nh.iph->protocol, &spi, &seq)) != 0) + if (err != 0) goto drop; do { - struct iphdr *iph = skb->nh.iph; + const struct iphdr *iph = ip_hdr(skb); if (xfrm_nr == XFRM_MAX_DEPTH) goto drop; @@ -113,7 +113,8 @@ int xfrm4_rcv_encap(struct sk_buff *skb, break; } - if ((err = xfrm_parse_spi(skb, skb->nh.iph->protocol, &spi, &seq)) < 0) + err = xfrm_parse_spi(skb, ip_hdr(skb)->protocol, &spi, &seq); + if (err < 0) goto drop; } while (!err); @@ -146,15 +147,15 @@ int xfrm4_rcv_encap(struct sk_buff *skb, return 0; } else { #ifdef CONFIG_NETFILTER - __skb_push(skb, skb->data - skb->nh.raw); - skb->nh.iph->tot_len = htons(skb->len); - ip_send_check(skb->nh.iph); + __skb_push(skb, skb->data - skb_network_header(skb)); + ip_hdr(skb)->tot_len = htons(skb->len); + ip_send_check(ip_hdr(skb)); NF_HOOK(PF_INET, NF_IP_PRE_ROUTING, skb, skb->dev, NULL, xfrm4_rcv_encap_finish); return 0; #else - return -skb->nh.iph->protocol; + return -ip_hdr(skb)->protocol; #endif } diff -puN net/ipv4/xfrm4_mode_beet.c~git-net net/ipv4/xfrm4_mode_beet.c --- a/net/ipv4/xfrm4_mode_beet.c~git-net +++ a/net/ipv4/xfrm4_mode_beet.c @@ -29,20 +29,21 @@ */ static int xfrm4_beet_output(struct xfrm_state *x, struct sk_buff *skb) { - struct iphdr *iph, *top_iph = NULL; + struct iphdr *iph, *top_iph; int hdrlen, optlen; - iph = skb->nh.iph; - skb->h.ipiph = iph; + iph = ip_hdr(skb); + skb->transport_header = skb->network_header; hdrlen = 0; optlen = iph->ihl * 4 - sizeof(*iph); if (unlikely(optlen)) hdrlen += IPV4_BEET_PHMAXLEN - (optlen & 4); - skb->nh.raw = skb_push(skb, x->props.header_len + hdrlen); - top_iph = skb->nh.iph; - skb->h.raw += sizeof(*iph) - hdrlen; + skb_push(skb, x->props.header_len - IPV4_BEET_PHMAXLEN + hdrlen); + skb_reset_network_header(skb); + top_iph = ip_hdr(skb); + skb->transport_header += sizeof(*iph) - hdrlen; memmove(top_iph, iph, sizeof(*iph)); if (unlikely(optlen)) { @@ -50,7 +51,7 @@ static int xfrm4_beet_output(struct xfrm BUG_ON(optlen < 0); - ph = (struct ip_beet_phdr *)skb->h.raw; + ph = (struct ip_beet_phdr *)skb_transport_header(skb); ph->padlen = 4 - (optlen & 4); ph->hdrlen = (optlen + ph->padlen + sizeof(*ph)) / 8; ph->nexthdr = top_iph->protocol; 
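The xfrm4_beet_output() change just above carries IPv4 options behind a small BEET pseudo-header, padded out to an 8-byte boundary: padlen = 4 - (optlen & 4), and ph->hdrlen is expressed in 8-byte units. The sketch below simply tabulates that arithmetic for the legal option lengths; it assumes the pseudo-header is the 4-byte { nexthdr, hdrlen, padlen, reserved } structure, which is not shown in this diff.

#include <stdio.h>

int main(void)
{
	/* IPv4 options come in multiples of 4 bytes, at most 40 bytes.
	 * The pseudo-header is only emitted when optlen != 0.
	 */
	for (int optlen = 4; optlen <= 40; optlen += 4) {
		int padlen = 4 - (optlen & 4);		/* pad options to 8 bytes   */
		int hdrlen = (optlen + padlen + 4) / 8;	/* ph->hdrlen, 8-byte units */

		printf("optlen %2d -> padlen %d, ph->hdrlen %d (%2d bytes of pseudo-header + options + pad)\n",
		       optlen, padlen, hdrlen, hdrlen * 8);
	}
	return 0;
}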
@@ -69,20 +70,18 @@ static int xfrm4_beet_output(struct xfrm static int xfrm4_beet_input(struct xfrm_state *x, struct sk_buff *skb) { - struct iphdr *iph = skb->nh.iph; + struct iphdr *iph = ip_hdr(skb); int phlen = 0; int optlen = 0; - __u8 ph_nexthdr = 0, protocol = 0; + u8 ph_nexthdr = 0; int err = -EINVAL; - protocol = iph->protocol; - if (unlikely(iph->protocol == IPPROTO_BEETPH)) { struct ip_beet_phdr *ph; if (!pskb_may_pull(skb, sizeof(*ph))) goto out; - ph = (struct ip_beet_phdr *)(skb->h.ipiph + 1); + ph = (struct ip_beet_phdr *)(ipip_hdr(skb) + 1); phlen = sizeof(*ph) + ph->padlen; optlen = ph->hdrlen * 8 - phlen; @@ -96,22 +95,20 @@ static int xfrm4_beet_input(struct xfrm_ ph_nexthdr = ph->nexthdr; } - skb->nh.raw = skb->data + (phlen - sizeof(*iph)); - memmove(skb->nh.raw, iph, sizeof(*iph)); - skb->h.raw = skb->data + (phlen + optlen); - skb->data = skb->h.raw; + skb_set_network_header(skb, phlen - sizeof(*iph)); + memmove(skb_network_header(skb), iph, sizeof(*iph)); + skb_set_transport_header(skb, phlen + optlen); + skb->data = skb_transport_header(skb); - iph = skb->nh.iph; + iph = ip_hdr(skb); iph->ihl = (sizeof(*iph) + optlen) / 4; iph->tot_len = htons(skb->len + iph->ihl * 4); iph->daddr = x->sel.daddr.a4; iph->saddr = x->sel.saddr.a4; if (ph_nexthdr) iph->protocol = ph_nexthdr; - else - iph->protocol = protocol; iph->check = 0; - iph->check = ip_fast_csum(skb->nh.raw, iph->ihl); + iph->check = ip_fast_csum(skb_network_header(skb), iph->ihl); err = 0; out: return err; diff -puN net/ipv4/xfrm4_mode_transport.c~git-net net/ipv4/xfrm4_mode_transport.c --- a/net/ipv4/xfrm4_mode_transport.c~git-net +++ a/net/ipv4/xfrm4_mode_transport.c @@ -23,16 +23,13 @@ */ static int xfrm4_transport_output(struct xfrm_state *x, struct sk_buff *skb) { - struct iphdr *iph; - int ihl; + struct iphdr *iph = ip_hdr(skb); + int ihl = iph->ihl * 4; - iph = skb->nh.iph; - skb->h.ipiph = iph; - - ihl = iph->ihl * 4; - skb->h.raw += ihl; - - skb->nh.raw = memmove(skb_push(skb, x->props.header_len), iph, ihl); + skb->transport_header = skb->network_header + ihl; + skb_push(skb, x->props.header_len); + skb_reset_network_header(skb); + memmove(skb_network_header(skb), iph, ihl); return 0; } @@ -46,12 +43,15 @@ static int xfrm4_transport_output(struct */ static int xfrm4_transport_input(struct xfrm_state *x, struct sk_buff *skb) { - int ihl = skb->data - skb->h.raw; + int ihl = skb->data - skb_transport_header(skb); - if (skb->h.raw != skb->nh.raw) - skb->nh.raw = memmove(skb->h.raw, skb->nh.raw, ihl); - skb->nh.iph->tot_len = htons(skb->len + ihl); - skb->h.raw = skb->data; + if (skb->transport_header != skb->network_header) { + memmove(skb_transport_header(skb), + skb_network_header(skb), ihl); + skb->network_header = skb->transport_header; + } + ip_hdr(skb)->tot_len = htons(skb->len + ihl); + skb_reset_transport_header(skb); return 0; } diff -puN net/ipv4/xfrm4_mode_tunnel.c~git-net net/ipv4/xfrm4_mode_tunnel.c --- a/net/ipv4/xfrm4_mode_tunnel.c~git-net +++ a/net/ipv4/xfrm4_mode_tunnel.c @@ -16,8 +16,8 @@ static inline void ipip_ecn_decapsulate(struct sk_buff *skb) { - struct iphdr *outer_iph = skb->nh.iph; - struct iphdr *inner_iph = skb->h.ipiph; + struct iphdr *outer_iph = ip_hdr(skb); + struct iphdr *inner_iph = ipip_hdr(skb); if (INET_ECN_is_ce(outer_iph->tos)) IP_ECN_set_ce(inner_iph); @@ -26,7 +26,7 @@ static inline void ipip_ecn_decapsulate( static inline void ipip6_ecn_decapsulate(struct iphdr *iph, struct sk_buff *skb) { if (INET_ECN_is_ce(iph->tos)) - IP6_ECN_set_ce(skb->nh.ipv6h); + 
IP6_ECN_set_ce(ipv6_hdr(skb)); } /* Add encapsulation header. @@ -46,11 +46,12 @@ static int xfrm4_tunnel_output(struct xf struct iphdr *iph, *top_iph; int flags; - iph = skb->nh.iph; - skb->h.ipiph = iph; + iph = ip_hdr(skb); + skb->transport_header = skb->network_header; - skb->nh.raw = skb_push(skb, x->props.header_len); - top_iph = skb->nh.iph; + skb_push(skb, x->props.header_len); + skb_reset_network_header(skb); + top_iph = ip_hdr(skb); top_iph->ihl = 5; top_iph->version = 4; @@ -90,10 +91,11 @@ static int xfrm4_tunnel_output(struct xf static int xfrm4_tunnel_input(struct xfrm_state *x, struct sk_buff *skb) { - struct iphdr *iph = skb->nh.iph; + struct iphdr *iph = ip_hdr(skb); + const unsigned char *old_mac; int err = -EINVAL; - switch(iph->protocol){ + switch (iph->protocol){ case IPPROTO_IPIP: break; #if defined(CONFIG_IPV6) || defined (CONFIG_IPV6_MODULE) @@ -111,10 +113,10 @@ static int xfrm4_tunnel_input(struct xfr (err = pskb_expand_head(skb, 0, 0, GFP_ATOMIC))) goto out; - iph = skb->nh.iph; + iph = ip_hdr(skb); if (iph->protocol == IPPROTO_IPIP) { if (x->props.flags & XFRM_STATE_DECAP_DSCP) - ipv4_copy_dscp(iph, skb->h.ipiph); + ipv4_copy_dscp(iph, ipip_hdr(skb)); if (!(x->props.flags & XFRM_STATE_NOECN)) ipip_ecn_decapsulate(skb); } @@ -125,9 +127,10 @@ static int xfrm4_tunnel_input(struct xfr skb->protocol = htons(ETH_P_IPV6); } #endif - skb->mac.raw = memmove(skb->data - skb->mac_len, - skb->mac.raw, skb->mac_len); - skb->nh.raw = skb->data; + old_mac = skb_mac_header(skb); + skb_set_mac_header(skb, -skb->mac_len); + memmove(skb_mac_header(skb), old_mac, skb->mac_len); + skb_reset_network_header(skb); err = 0; out: diff -puN net/ipv4/xfrm4_output.c~git-net net/ipv4/xfrm4_output.c --- a/net/ipv4/xfrm4_output.c~git-net +++ a/net/ipv4/xfrm4_output.c @@ -22,14 +22,13 @@ static int xfrm4_tunnel_check_size(struc { int mtu, ret = 0; struct dst_entry *dst; - struct iphdr *iph = skb->nh.iph; if (IPCB(skb)->flags & IPSKB_XFRM_TUNNEL_SIZE) goto out; IPCB(skb)->flags |= IPSKB_XFRM_TUNNEL_SIZE; - if (!(iph->frag_off & htons(IP_DF)) || skb->local_df) + if (!(ip_hdr(skb)->frag_off & htons(IP_DF)) || skb->local_df) goto out; dst = skb->dst; diff -puN net/ipv4/xfrm4_policy.c~git-net net/ipv4/xfrm4_policy.c --- a/net/ipv4/xfrm4_policy.c~git-net +++ a/net/ipv4/xfrm4_policy.c @@ -119,7 +119,7 @@ __xfrm4_bundle_create(struct xfrm_policy if (xfrm[i]->props.mode == XFRM_MODE_TUNNEL) { unsigned short encap_family = xfrm[i]->props.family; - switch(encap_family) { + switch (encap_family) { case AF_INET: fl_tunnel.fl4_dst = xfrm[i]->id.daddr.a4; fl_tunnel.fl4_src = xfrm[i]->props.saddr.a4; @@ -209,8 +209,8 @@ error: static void _decode_session4(struct sk_buff *skb, struct flowi *fl) { - struct iphdr *iph = skb->nh.iph; - u8 *xprth = skb->nh.raw + iph->ihl*4; + struct iphdr *iph = ip_hdr(skb); + u8 *xprth = skb_network_header(skb) + iph->ihl * 4; memset(fl, 0, sizeof(struct flowi)); if (!(iph->frag_off & htons(IP_MF | IP_OFFSET))) { @@ -263,7 +263,7 @@ _decode_session4(struct sk_buff *skb, st default: fl->fl_ipsec_spi = 0; break; - }; + } } fl->proto = iph->protocol; fl->fl4_dst = iph->daddr; diff -puN net/ipv4/xfrm4_tunnel.c~git-net net/ipv4/xfrm4_tunnel.c --- a/net/ipv4/xfrm4_tunnel.c~git-net +++ a/net/ipv4/xfrm4_tunnel.c @@ -12,9 +12,8 @@ static int ipip_output(struct xfrm_state *x, struct sk_buff *skb) { - struct iphdr *iph; + struct iphdr *iph = ip_hdr(skb); - iph = skb->nh.iph; iph->tot_len = htons(skb->len); ip_send_check(iph); diff -puN net/ipv6/Kconfig~git-net net/ipv6/Kconfig --- 
a/net/ipv6/Kconfig~git-net +++ a/net/ipv6/Kconfig @@ -57,6 +57,16 @@ config IPV6_ROUTE_INFO If unsure, say N. +config IPV6_OPTIMISTIC_DAD + bool "IPv6: Enable RFC 4429 Optimistic DAD (EXPERIMENTAL)" + depends on IPV6 && EXPERIMENTAL + ---help--- + This is experimental support for optimistic Duplicate + Address Detection. It allows for autoconfigured addresses + to be used more quickly. + + If unsure, say N. + config INET6_AH tristate "IPv6: AH transformation" depends on IPV6 diff -puN net/ipv6/Makefile~git-net net/ipv6/Makefile --- a/net/ipv6/Makefile~git-net +++ a/net/ipv6/Makefile @@ -8,7 +8,7 @@ ipv6-objs := af_inet6.o anycast.o ip6_ou route.o ip6_fib.o ipv6_sockglue.o ndisc.o udp.o udplite.o \ raw.o protocol.o icmp.o mcast.o reassembly.o tcp_ipv6.o \ exthdrs.o sysctl_net_ipv6.o datagram.o proc.o \ - ip6_flowlabel.o ipv6_syms.o inet6_connection_sock.o + ip6_flowlabel.o inet6_connection_sock.o ipv6-$(CONFIG_XFRM) += xfrm6_policy.o xfrm6_state.o xfrm6_input.o \ xfrm6_output.o diff -puN net/ipv6/addrconf.c~git-net net/ipv6/addrconf.c --- a/net/ipv6/addrconf.c~git-net +++ a/net/ipv6/addrconf.c @@ -269,6 +269,8 @@ void in6_dev_finish_destroy(struct inet6 call_rcu(&idev->rcu, in6_dev_finish_destroy_rcu); } +EXPORT_SYMBOL(in6_dev_finish_destroy); + static struct inet6_dev * ipv6_add_dev(struct net_device *dev) { struct inet6_dev *ndev; @@ -526,6 +528,16 @@ ipv6_add_addr(struct inet6_dev *idev, co ifa->rt = rt; + /* + * part one of RFC 4429, section 3.3 + * We should not configure an address as + * optimistic if we do not yet know the link + * layer address of our nexhop router + */ + + if (rt->rt6i_nexthop == NULL) + ifa->flags &= ~IFA_F_OPTIMISTIC; + ifa->idev = idev; in6_dev_hold(idev); /* For caller */ @@ -702,6 +714,7 @@ static int ipv6_create_tempaddr(struct i int tmp_plen; int ret = 0; int max_addresses; + u32 addr_flags; write_lock(&idev->lock); if (ift) { @@ -759,10 +772,17 @@ retry: spin_unlock_bh(&ifp->lock); write_unlock(&idev->lock); + + addr_flags = IFA_F_TEMPORARY; + /* set in addrconf_prefix_rcv() */ + if (ifp->flags & IFA_F_OPTIMISTIC) + addr_flags |= IFA_F_OPTIMISTIC; + ift = !max_addresses || ipv6_count_addresses(idev) < max_addresses ? ipv6_add_addr(idev, &addr, tmp_plen, - ipv6_addr_type(&addr)&IPV6_ADDR_SCOPE_MASK, IFA_F_TEMPORARY) : NULL; + ipv6_addr_type(&addr)&IPV6_ADDR_SCOPE_MASK, + addr_flags) : NULL; if (!ift || IS_ERR(ift)) { in6_ifa_put(ifp); in6_dev_put(idev); @@ -894,13 +914,14 @@ int ipv6_dev_get_saddr(struct net_device * - Tentative Address (RFC2462 section 5.4) * - A tentative address is not considered * "assigned to an interface" in the traditional - * sense. + * sense, unless it is also flagged as optimistic. * - Candidate Source Address (section 4) * - In any case, anycast addresses, multicast * addresses, and the unspecified address MUST * NOT be included in a candidate set. 
*/ - if (ifa->flags & IFA_F_TENTATIVE) + if ((ifa->flags & IFA_F_TENTATIVE) && + (!(ifa->flags & IFA_F_OPTIMISTIC))) continue; if (unlikely(score.addr_type == IPV6_ADDR_ANY || score.addr_type & IPV6_ADDR_MULTICAST)) { @@ -959,15 +980,17 @@ int ipv6_dev_get_saddr(struct net_device } } - /* Rule 3: Avoid deprecated address */ + /* Rule 3: Avoid deprecated and optimistic addresses */ if (hiscore.rule < 3) { if (ipv6_saddr_preferred(hiscore.addr_type) || - !(ifa_result->flags & IFA_F_DEPRECATED)) + (((ifa_result->flags & + (IFA_F_DEPRECATED|IFA_F_OPTIMISTIC)) == 0))) hiscore.attrs |= IPV6_SADDR_SCORE_PREFERRED; hiscore.rule++; } if (ipv6_saddr_preferred(score.addr_type) || - !(ifa->flags & IFA_F_DEPRECATED)) { + (((ifa_result->flags & + (IFA_F_DEPRECATED|IFA_F_OPTIMISTIC)) == 0))) { score.attrs |= IPV6_SADDR_SCORE_PREFERRED; if (!(hiscore.attrs & IPV6_SADDR_SCORE_PREFERRED)) { score.rule = 3; @@ -1105,8 +1128,10 @@ int ipv6_get_saddr(struct dst_entry *dst return ipv6_dev_get_saddr(dst ? ip6_dst_idev(dst)->dev : NULL, daddr, saddr); } +EXPORT_SYMBOL(ipv6_get_saddr); -int ipv6_get_lladdr(struct net_device *dev, struct in6_addr *addr) +int ipv6_get_lladdr(struct net_device *dev, struct in6_addr *addr, + unsigned char banned_flags) { struct inet6_dev *idev; int err = -EADDRNOTAVAIL; @@ -1117,7 +1142,7 @@ int ipv6_get_lladdr(struct net_device *d read_lock_bh(&idev->lock); for (ifp=idev->addr_list; ifp; ifp=ifp->if_next) { - if (ifp->scope == IFA_LINK && !(ifp->flags&IFA_F_TENTATIVE)) { + if (ifp->scope == IFA_LINK && !(ifp->flags & banned_flags)) { ipv6_addr_copy(addr, &ifp->addr); err = 0; break; @@ -1159,6 +1184,8 @@ int ipv6_chk_addr(struct in6_addr *addr, return ifp != NULL; } +EXPORT_SYMBOL(ipv6_chk_addr); + static int ipv6_chk_same_addr(const struct in6_addr *addr, struct net_device *dev) { @@ -1667,6 +1694,13 @@ ok: if (ifp == NULL && valid_lft) { int max_addresses = in6_dev->cnf.max_addresses; + u32 addr_flags = 0; + +#ifdef CONFIG_IPV6_OPTIMISTIC_DAD + if (in6_dev->cnf.optimistic_dad && + !ipv6_devconf.forwarding) + addr_flags = IFA_F_OPTIMISTIC; +#endif /* Do not allow to create too much of autoconfigured * addresses; this would be too easy way to crash kernel. 
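The addrconf hunks above are what let an RFC 4429 optimistic address act as a source address while DAD is still in progress: a tentative address is skipped unless it is also flagged optimistic, and rule 3 now demotes optimistic addresses the same way it demotes deprecated ones. A self-contained sketch of the eligibility test follows; the flag values are repeated here only so the example compiles on its own and mirror include/linux/if_addr.h.

#include <stdio.h>

#define IFA_F_OPTIMISTIC 0x04
#define IFA_F_DEPRECATED 0x20
#define IFA_F_TENTATIVE  0x40

static int usable_as_source(unsigned int flags)
{
	/* same condition as the patched ipv6_dev_get_saddr() loop */
	if ((flags & IFA_F_TENTATIVE) && !(flags & IFA_F_OPTIMISTIC))
		return 0;
	return 1;
}

int main(void)
{
	unsigned int cases[] = {
		0,                                  /* fully preferred      */
		IFA_F_TENTATIVE,                    /* DAD still running    */
		IFA_F_TENTATIVE | IFA_F_OPTIMISTIC, /* optimistic DAD       */
		IFA_F_DEPRECATED,                   /* usable, scores lower */
	};

	for (unsigned int i = 0; i < sizeof(cases) / sizeof(cases[0]); i++)
		printf("flags 0x%02x -> %s\n", cases[i],
		       usable_as_source(cases[i]) ? "candidate" : "skipped");
	return 0;
}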
@@ -1674,7 +1708,8 @@ ok: if (!max_addresses || ipv6_count_addresses(in6_dev) < max_addresses) ifp = ipv6_add_addr(in6_dev, &addr, pinfo->prefix_len, - addr_type&IPV6_ADDR_SCOPE_MASK, 0); + addr_type&IPV6_ADDR_SCOPE_MASK, + addr_flags); if (!ifp || IS_ERR(ifp)) { in6_dev_put(in6_dev); @@ -1882,6 +1917,11 @@ static int inet6_addr_add(int ifindex, s addrconf_prefix_route(&ifp->addr, ifp->prefix_len, dev, jiffies_to_clock_t(valid_lft * HZ), flags); + /* + * Note that section 3.1 of RFC 4429 indicates + * that the Optimistic flag should not be set for + * manually configured addresses + */ addrconf_dad_start(ifp, 0); in6_ifa_put(ifp); addrconf_verify(0); @@ -2058,8 +2098,16 @@ static void init_loopback(struct net_dev static void addrconf_add_linklocal(struct inet6_dev *idev, struct in6_addr *addr) { struct inet6_ifaddr * ifp; + u32 addr_flags = IFA_F_PERMANENT; + +#ifdef CONFIG_IPV6_OPTIMISTIC_DAD + if (idev->cnf.optimistic_dad && + !ipv6_devconf.forwarding) + addr_flags |= IFA_F_OPTIMISTIC; +#endif - ifp = ipv6_add_addr(idev, addr, 64, IFA_LINK, IFA_F_PERMANENT); + + ifp = ipv6_add_addr(idev, addr, 64, IFA_LINK, addr_flags); if (!IS_ERR(ifp)) { addrconf_prefix_route(&ifp->addr, ifp->prefix_len, idev->dev, 0, 0); addrconf_dad_start(ifp, 0); @@ -2127,7 +2175,7 @@ ipv6_inherit_linklocal(struct inet6_dev { struct in6_addr lladdr; - if (!ipv6_get_lladdr(link_dev, &lladdr)) { + if (!ipv6_get_lladdr(link_dev, &lladdr, IFA_F_TENTATIVE)) { addrconf_add_linklocal(idev, &lladdr); return 0; } @@ -2238,7 +2286,7 @@ static int addrconf_notify(struct notifi default: addrconf_dev_config(dev); break; - }; + } if (idev) { if (run_pending) addrconf_dad_run(idev); @@ -2291,7 +2339,7 @@ static int addrconf_notify(struct notifi } #endif break; - }; + } return NOTIFY_OK; } @@ -2472,7 +2520,11 @@ static void addrconf_dad_kick(struct ine unsigned long rand_num; struct inet6_dev *idev = ifp->idev; - rand_num = net_random() % (idev->cnf.rtr_solicit_delay ? : 1); + if (ifp->flags & IFA_F_OPTIMISTIC) + rand_num = 0; + else + rand_num = net_random() % (idev->cnf.rtr_solicit_delay ? 
: 1); + ifp->probes = idev->cnf.dad_transmits; addrconf_mod_timer(ifp, AC_DAD, rand_num); } @@ -2494,7 +2546,7 @@ static void addrconf_dad_start(struct in if (dev->flags&(IFF_NOARP|IFF_LOOPBACK) || !(ifp->flags&IFA_F_TENTATIVE) || ifp->flags & IFA_F_NODAD) { - ifp->flags &= ~IFA_F_TENTATIVE; + ifp->flags &= ~(IFA_F_TENTATIVE|IFA_F_OPTIMISTIC); spin_unlock_bh(&ifp->lock); read_unlock_bh(&idev->lock); @@ -2514,6 +2566,14 @@ static void addrconf_dad_start(struct in addrconf_dad_stop(ifp); return; } + + /* + * Optimistic nodes can start receiving + * Frames right away + */ + if(ifp->flags & IFA_F_OPTIMISTIC) + ip6_ins_rt(ifp->rt); + addrconf_dad_kick(ifp); spin_unlock_bh(&ifp->lock); out: @@ -2538,7 +2598,7 @@ static void addrconf_dad_timer(unsigned * DAD was successful */ - ifp->flags &= ~IFA_F_TENTATIVE; + ifp->flags &= ~(IFA_F_TENTATIVE|IFA_F_OPTIMISTIC); spin_unlock_bh(&ifp->lock); read_unlock_bh(&idev->lock); @@ -3162,7 +3222,6 @@ static int inet6_dump_addr(struct sk_buf s_idx = cb->args[0]; s_ip_idx = ip_idx = cb->args[1]; - read_lock(&dev_base_lock); for (dev = dev_base, idx = 0; dev; dev = dev->next, idx++) { if (idx < s_idx) @@ -3224,7 +3283,6 @@ done: read_unlock_bh(&idev->lock); in6_dev_put(idev); } - read_unlock(&dev_base_lock); cb->args[0] = idx; cb->args[1] = ip_idx; return skb->len; @@ -3356,6 +3414,9 @@ static inline void ipv6_store_devconf(st #endif #endif array[DEVCONF_PROXY_NDP] = cnf->proxy_ndp; +#ifdef CONFIG_IPV6_OPTIMISTIC_DAD + array[DEVCONF_OPTIMISTIC_DAD] = cnf->optimistic_dad; +#endif } static inline size_t inet6_if_nlmsg_size(void) @@ -3369,6 +3430,8 @@ static inline size_t inet6_if_nlmsg_size nla_total_size(4) /* IFLA_INET6_FLAGS */ + nla_total_size(sizeof(struct ifla_cacheinfo)) + nla_total_size(DEVCONF_MAX * 4) /* IFLA_INET6_CONF */ + + nla_total_size(IPSTATS_MIB_MAX * 8) /* IFLA_INET6_STATS */ + + nla_total_size(ICMP6_MIB_MAX * 8) /* IFLA_INET6_ICMP6STATS */ ); } @@ -3376,7 +3439,7 @@ static int inet6_fill_ifinfo(struct sk_b u32 pid, u32 seq, int event, unsigned int flags) { struct net_device *dev = idev->dev; - struct nlattr *conf; + struct nlattr *nla; struct ifinfomsg *hdr; struct nlmsghdr *nlh; void *protoinfo; @@ -3416,12 +3479,22 @@ static int inet6_fill_ifinfo(struct sk_b ci.retrans_time = idev->nd_parms->retrans_time; NLA_PUT(skb, IFLA_INET6_CACHEINFO, sizeof(ci), &ci); - conf = nla_reserve(skb, IFLA_INET6_CONF, DEVCONF_MAX * sizeof(s32)); - if (conf == NULL) + nla = nla_reserve(skb, IFLA_INET6_CONF, DEVCONF_MAX * sizeof(s32)); + if (nla == NULL) goto nla_put_failure; - ipv6_store_devconf(&idev->cnf, nla_data(conf), nla_len(conf)); + ipv6_store_devconf(&idev->cnf, nla_data(nla), nla_len(nla)); - /* XXX - Statistics/MC not implemented */ + /* XXX - MC not implemented */ + + nla = nla_reserve(skb, IFLA_INET6_STATS, IPSTATS_MIB_MAX * sizeof(u64)); + if (nla == NULL) + goto nla_put_failure; + snmp6_fill_stats(nla_data(nla), idev, IFLA_INET6_STATS, nla_len(nla)); + + nla = nla_reserve(skb, IFLA_INET6_ICMP6STATS, ICMP6_MIB_MAX * sizeof(u64)); + if (nla == NULL) + goto nla_put_failure; + snmp6_fill_stats(nla_data(nla), idev, IFLA_INET6_ICMP6STATS, nla_len(nla)); nla_nest_end(skb, protoinfo); return nlmsg_end(skb, nlh); @@ -3547,30 +3620,20 @@ errout: rtnl_set_sk_err(RTNLGRP_IPV6_PREFIX, err); } -static struct rtnetlink_link inet6_rtnetlink_table[RTM_NR_MSGTYPES] = { - [RTM_GETLINK - RTM_BASE] = { .dumpit = inet6_dump_ifinfo, }, - [RTM_NEWADDR - RTM_BASE] = { .doit = inet6_rtm_newaddr, }, - [RTM_DELADDR - RTM_BASE] = { .doit = inet6_rtm_deladdr, }, - 
[RTM_GETADDR - RTM_BASE] = { .doit = inet6_rtm_getaddr, - .dumpit = inet6_dump_ifaddr, }, - [RTM_GETMULTICAST - RTM_BASE] = { .dumpit = inet6_dump_ifmcaddr, }, - [RTM_GETANYCAST - RTM_BASE] = { .dumpit = inet6_dump_ifacaddr, }, - [RTM_NEWROUTE - RTM_BASE] = { .doit = inet6_rtm_newroute, }, - [RTM_DELROUTE - RTM_BASE] = { .doit = inet6_rtm_delroute, }, - [RTM_GETROUTE - RTM_BASE] = { .doit = inet6_rtm_getroute, - .dumpit = inet6_dump_fib, }, -#ifdef CONFIG_IPV6_MULTIPLE_TABLES - [RTM_GETRULE - RTM_BASE] = { .dumpit = fib6_rules_dump, }, -#endif -}; - static void __ipv6_ifa_notify(int event, struct inet6_ifaddr *ifp) { inet6_ifa_notify(event ? : RTM_NEWADDR, ifp); switch (event) { case RTM_NEWADDR: - ip6_ins_rt(ifp->rt); + /* + * If the address was optimistic + * we inserted the route at the start of + * our DAD process, so we don't need + * to do it again + */ + if (!(ifp->rt->rt6i_node)) + ip6_ins_rt(ifp->rt); if (ifp->idev->cnf.forwarding) addrconf_join_anycast(ifp); break; @@ -3883,6 +3946,17 @@ static struct addrconf_sysctl_table .mode = 0644, .proc_handler = &proc_dointvec, }, +#ifdef CONFIG_IPV6_OPTIMISTIC_DAD + { + .ctl_name = NET_IPV6_OPTIMISTIC_DAD, + .procname = "optimistic_dad", + .data = &ipv6_devconf.optimistic_dad, + .maxlen = sizeof(int), + .mode = 0644, + .proc_handler = &proc_dointvec, + + }, +#endif { .ctl_name = 0, /* sentinel */ } @@ -4010,11 +4084,15 @@ int register_inet6addr_notifier(struct n return atomic_notifier_chain_register(&inet6addr_chain, nb); } +EXPORT_SYMBOL(register_inet6addr_notifier); + int unregister_inet6addr_notifier(struct notifier_block *nb) { return atomic_notifier_chain_unregister(&inet6addr_chain,nb); } +EXPORT_SYMBOL(unregister_inet6addr_notifier); + /* * Init / cleanup code */ @@ -4053,7 +4131,18 @@ int __init addrconf_init(void) register_netdevice_notifier(&ipv6_dev_notf); addrconf_verify(0); - rtnetlink_links[PF_INET6] = inet6_rtnetlink_table; + + err = __rtnl_register(PF_INET6, RTM_GETLINK, NULL, inet6_dump_ifinfo); + if (err < 0) + goto errout; + + /* Only the first call to __rtnl_register can fail */ + __rtnl_register(PF_INET6, RTM_NEWADDR, inet6_rtm_newaddr, NULL); + __rtnl_register(PF_INET6, RTM_DELADDR, inet6_rtm_deladdr, NULL); + __rtnl_register(PF_INET6, RTM_GETADDR, inet6_rtm_getaddr, inet6_dump_ifaddr); + __rtnl_register(PF_INET6, RTM_GETMULTICAST, NULL, inet6_dump_ifmcaddr); + __rtnl_register(PF_INET6, RTM_GETANYCAST, NULL, inet6_dump_ifacaddr); + #ifdef CONFIG_SYSCTL addrconf_sysctl.sysctl_header = register_sysctl_table(addrconf_sysctl.addrconf_root_dir); @@ -4061,6 +4150,10 @@ int __init addrconf_init(void) #endif return 0; +errout: + unregister_netdevice_notifier(&ipv6_dev_notf); + + return err; } void __exit addrconf_cleanup(void) @@ -4072,7 +4165,6 @@ void __exit addrconf_cleanup(void) unregister_netdevice_notifier(&ipv6_dev_notf); - rtnetlink_links[PF_INET6] = NULL; #ifdef CONFIG_SYSCTL addrconf_sysctl_unregister(&ipv6_devconf_dflt); addrconf_sysctl_unregister(&ipv6_devconf); diff -puN net/ipv6/af_inet6.c~git-net net/ipv6/af_inet6.c --- a/net/ipv6/af_inet6.c~git-net +++ a/net/ipv6/af_inet6.c @@ -98,6 +98,11 @@ static int inet6_create(struct socket *s int try_loading_module = 0; int err; + if (sock->type != SOCK_RAW && + sock->type != SOCK_DGRAM && + !inet_ehash_secret) + build_ehash_secret(); + /* Look for the requested type/protocol pair. 
*/ answer = NULL; lookup_protocol: @@ -349,6 +354,8 @@ out: return err; } +EXPORT_SYMBOL(inet6_bind); + int inet6_release(struct socket *sock) { struct sock *sk = sock->sk; @@ -365,6 +372,8 @@ int inet6_release(struct socket *sock) return inet_release(sock); } +EXPORT_SYMBOL(inet6_release); + int inet6_destroy_sock(struct sock *sk) { struct ipv6_pinfo *np = inet6_sk(sk); @@ -428,6 +437,8 @@ int inet6_getname(struct socket *sock, s return(0); } +EXPORT_SYMBOL(inet6_getname); + int inet6_ioctl(struct socket *sock, unsigned int cmd, unsigned long arg) { struct sock *sk = sock->sk; @@ -437,6 +448,9 @@ int inet6_ioctl(struct socket *sock, uns case SIOCGSTAMP: return sock_get_timestamp(sk, (struct timeval __user *)arg); + case SIOCGSTAMPNS: + return sock_get_timestampns(sk, (struct timespec __user *)arg); + case SIOCADDRT: case SIOCDELRT: @@ -457,6 +471,8 @@ int inet6_ioctl(struct socket *sock, uns return(0); } +EXPORT_SYMBOL(inet6_ioctl); + const struct proto_ops inet6_stream_ops = { .family = PF_INET6, .owner = THIS_MODULE, @@ -603,6 +619,8 @@ out_illegal: goto out; } +EXPORT_SYMBOL(inet6_register_protosw); + void inet6_unregister_protosw(struct inet_protosw *p) { @@ -619,6 +637,8 @@ inet6_unregister_protosw(struct inet_pro } } +EXPORT_SYMBOL(inet6_unregister_protosw); + int inet6_sk_rebuild_header(struct sock *sk) { int err; @@ -678,7 +698,8 @@ int ipv6_opt_accepted(struct sock *sk, s if (np->rxopt.all) { if ((opt->hop && (np->rxopt.bits.hopopts || np->rxopt.bits.ohopopts)) || - ((IPV6_FLOWINFO_MASK & *(__be32*)skb->nh.raw) && + ((IPV6_FLOWINFO_MASK & + *(__be32 *)skb_network_header(skb)) && np->rxopt.bits.rxflow) || (opt->srcrt && (np->rxopt.bits.srcrt || np->rxopt.bits.osrcrt)) || @@ -691,39 +712,6 @@ int ipv6_opt_accepted(struct sock *sk, s EXPORT_SYMBOL_GPL(ipv6_opt_accepted); -int -snmp6_mib_init(void *ptr[2], size_t mibsize, size_t mibalign) -{ - if (ptr == NULL) - return -EINVAL; - - ptr[0] = __alloc_percpu(mibsize); - if (!ptr[0]) - goto err0; - - ptr[1] = __alloc_percpu(mibsize); - if (!ptr[1]) - goto err1; - - return 0; - -err1: - free_percpu(ptr[0]); - ptr[0] = NULL; -err0: - return -ENOMEM; -} - -void -snmp6_mib_free(void *ptr[2]) -{ - if (ptr == NULL) - return; - free_percpu(ptr[0]); - free_percpu(ptr[1]); - ptr[0] = ptr[1] = NULL; -} - static int __init init_ipv6_mibs(void) { if (snmp6_mib_init((void **)ipv6_statistics, sizeof (struct ipstats_mib), @@ -929,6 +917,8 @@ static void __exit inet6_exit(void) { /* First of all disallow new sockets creation. */ sock_unregister(PF_INET6); + /* Disallow any further netlink messages */ + rtnl_unregister_all(PF_INET6); /* Cleanup code parts. */ ipv6_packet_cleanup(); diff -puN net/ipv6/ah6.c~git-net net/ipv6/ah6.c --- a/net/ipv6/ah6.c~git-net +++ a/net/ipv6/ah6.c @@ -238,8 +238,8 @@ static int ah6_output(struct xfrm_state top_iph = (struct ipv6hdr *)skb->data; top_iph->payload_len = htons(skb->len - sizeof(*top_iph)); - nexthdr = *skb->nh.raw; - *skb->nh.raw = IPPROTO_AH; + nexthdr = *skb_network_header(skb); + *skb_network_header(skb) = IPPROTO_AH; /* When there are no extension headers, we only need to save the first * 8 bytes of the base IP header. 
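The ah6_input() hunk a little further down in this ah6.c diff clears the mutable IPv6 header fields (Traffic Class, Flow Label, Hop Limit) before the ICV is recomputed, because routers may legitimately rewrite those fields in transit and they are therefore excluded from AH's integrity check. A simplified userspace illustration of that masking step, using a stand-in structure rather than the kernel's struct ipv6hdr:

#include <stdio.h>
#include <stdint.h>

struct ipv6hdr_sketch {
	uint8_t priority;	/* upper bits of Traffic Class */
	uint8_t flow_lbl[3];	/* Flow Label                  */
	uint8_t hop_limit;
};

static void mask_mutable_fields(struct ipv6hdr_sketch *ip6h)
{
	ip6h->priority    = 0;
	ip6h->flow_lbl[0] = 0;
	ip6h->flow_lbl[1] = 0;
	ip6h->flow_lbl[2] = 0;
	ip6h->hop_limit   = 0;
}

int main(void)
{
	struct ipv6hdr_sketch hdr = { 0x0a, { 0x12, 0x34, 0x56 }, 64 };

	mask_mutable_fields(&hdr);
	printf("priority=%u flow=%02x%02x%02x hop_limit=%u\n",
	       hdr.priority, hdr.flow_lbl[0], hdr.flow_lbl[1],
	       hdr.flow_lbl[2], hdr.hop_limit);
	return 0;
}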
@@ -247,7 +247,7 @@ static int ah6_output(struct xfrm_state memcpy(tmp_base, top_iph, sizeof(tmp_base)); tmp_ext = NULL; - extlen = skb->h.raw - (unsigned char *)(top_iph + 1); + extlen = skb_transport_offset(skb) + sizeof(struct ipv6hdr); if (extlen) { extlen += sizeof(*tmp_ext); tmp_ext = kmalloc(extlen, GFP_ATOMIC); @@ -268,7 +268,7 @@ static int ah6_output(struct xfrm_state goto error_free_iph; } - ah = (struct ip_auth_hdr *)skb->h.raw; + ah = (struct ip_auth_hdr *)skb_transport_header(skb); ah->nexthdr = nexthdr; top_iph->priority = 0; @@ -316,8 +316,8 @@ static int ah6_input(struct xfrm_state * * * To erase AH: * Keeping copy of cleared headers. After AH processing, - * Moving the pointer of skb->nh.raw by using skb_pull as long as AH - * header length. Then copy back the copy as long as hdr_len + * Moving the pointer of skb->network_header by using skb_pull as long + * as AH header length. Then copy back the copy as long as hdr_len * If destination header following AH exists, copy it into after [Ext2]. * * |<>|[IPv6][Ext1][Ext2][Dest][Payload] @@ -325,6 +325,7 @@ static int ah6_input(struct xfrm_state * */ struct ipv6_auth_hdr *ah; + struct ipv6hdr *ip6h; struct ah_data *ahp; unsigned char *tmp_hdr = NULL; u16 hdr_len; @@ -341,7 +342,7 @@ static int ah6_input(struct xfrm_state * pskb_expand_head(skb, 0, 0, GFP_ATOMIC)) goto out; - hdr_len = skb->data - skb->nh.raw; + hdr_len = skb->data - skb_network_header(skb); ah = (struct ipv6_auth_hdr*)skb->data; ahp = x->data; nexthdr = ah->nexthdr; @@ -354,16 +355,17 @@ static int ah6_input(struct xfrm_state * if (!pskb_may_pull(skb, ah_hlen)) goto out; - tmp_hdr = kmemdup(skb->nh.raw, hdr_len, GFP_ATOMIC); + tmp_hdr = kmemdup(skb_network_header(skb), hdr_len, GFP_ATOMIC); if (!tmp_hdr) goto out; - if (ipv6_clear_mutable_options(skb->nh.ipv6h, hdr_len, XFRM_POLICY_IN)) + ip6h = ipv6_hdr(skb); + if (ipv6_clear_mutable_options(ip6h, hdr_len, XFRM_POLICY_IN)) goto free_out; - skb->nh.ipv6h->priority = 0; - skb->nh.ipv6h->flow_lbl[0] = 0; - skb->nh.ipv6h->flow_lbl[1] = 0; - skb->nh.ipv6h->flow_lbl[2] = 0; - skb->nh.ipv6h->hop_limit = 0; + ip6h->priority = 0; + ip6h->flow_lbl[0] = 0; + ip6h->flow_lbl[1] = 0; + ip6h->flow_lbl[2] = 0; + ip6h->hop_limit = 0; { u8 auth_data[MAX_AH_AUTH_LEN]; @@ -382,7 +384,9 @@ static int ah6_input(struct xfrm_state * } } - skb->h.raw = memcpy(skb->nh.raw += ah_hlen, tmp_hdr, hdr_len); + skb->network_header += ah_hlen; + memcpy(skb_network_header(skb), tmp_hdr, hdr_len); + skb->transport_header = skb->network_header; __skb_pull(skb, ah_hlen + hdr_len); kfree(tmp_hdr); diff -puN net/ipv6/datagram.c~git-net net/ipv6/datagram.c --- a/net/ipv6/datagram.c~git-net +++ a/net/ipv6/datagram.c @@ -209,7 +209,7 @@ void ipv6_icmp_error(struct sock *sk, st __be16 port, u32 info, u8 *payload) { struct ipv6_pinfo *np = inet6_sk(sk); - struct icmp6hdr *icmph = (struct icmp6hdr *)skb->h.raw; + struct icmp6hdr *icmph = icmp6_hdr(skb); struct sock_exterr_skb *serr; if (!np->recverr) @@ -227,11 +227,12 @@ void ipv6_icmp_error(struct sock *sk, st serr->ee.ee_pad = 0; serr->ee.ee_info = info; serr->ee.ee_data = 0; - serr->addr_offset = (u8*)&(((struct ipv6hdr*)(icmph+1))->daddr) - skb->nh.raw; + serr->addr_offset = (u8 *)&(((struct ipv6hdr *)(icmph + 1))->daddr) - + skb_network_header(skb); serr->port = port; - skb->h.raw = payload; __skb_pull(skb, payload - skb->data); + skb_reset_transport_header(skb); if (sock_queue_err_skb(sk, skb)) kfree_skb(skb); @@ -251,8 +252,9 @@ void ipv6_local_error(struct sock *sk, i if (!skb) return; - iph = 
(struct ipv6hdr*)skb_put(skb, sizeof(struct ipv6hdr)); - skb->nh.ipv6h = iph; + skb_put(skb, sizeof(struct ipv6hdr)); + skb_reset_network_header(skb); + iph = ipv6_hdr(skb); ipv6_addr_copy(&iph->daddr, &fl->fl6_dst); serr = SKB_EXT_ERR(skb); @@ -263,11 +265,11 @@ void ipv6_local_error(struct sock *sk, i serr->ee.ee_pad = 0; serr->ee.ee_info = info; serr->ee.ee_data = 0; - serr->addr_offset = (u8*)&iph->daddr - skb->nh.raw; + serr->addr_offset = (u8 *)&iph->daddr - skb_network_header(skb); serr->port = fl->fl_ip_dport; - skb->h.raw = skb->tail; - __skb_pull(skb, skb->tail - skb->data); + __skb_pull(skb, skb_tail_pointer(skb) - skb->data); + skb_reset_transport_header(skb); if (sock_queue_err_skb(sk, skb)) kfree_skb(skb); @@ -309,21 +311,24 @@ int ipv6_recv_error(struct sock *sk, str sin = (struct sockaddr_in6 *)msg->msg_name; if (sin) { + const unsigned char *nh = skb_network_header(skb); sin->sin6_family = AF_INET6; sin->sin6_flowinfo = 0; sin->sin6_port = serr->port; sin->sin6_scope_id = 0; if (serr->ee.ee_origin == SO_EE_ORIGIN_ICMP6) { ipv6_addr_copy(&sin->sin6_addr, - (struct in6_addr *)(skb->nh.raw + serr->addr_offset)); + (struct in6_addr *)(nh + serr->addr_offset)); if (np->sndflow) - sin->sin6_flowinfo = *(__be32*)(skb->nh.raw + serr->addr_offset - 24) & IPV6_FLOWINFO_MASK; + sin->sin6_flowinfo = + (*(__be32 *)(nh + serr->addr_offset - 24) & + IPV6_FLOWINFO_MASK); if (ipv6_addr_type(&sin->sin6_addr) & IPV6_ADDR_LINKLOCAL) sin->sin6_scope_id = IP6CB(skb)->iif; } else { ipv6_addr_set(&sin->sin6_addr, 0, 0, htonl(0xffff), - *(__be32*)(skb->nh.raw + serr->addr_offset)); + *(__be32 *)(nh + serr->addr_offset)); } } @@ -335,7 +340,7 @@ int ipv6_recv_error(struct sock *sk, str sin->sin6_flowinfo = 0; sin->sin6_scope_id = 0; if (serr->ee.ee_origin == SO_EE_ORIGIN_ICMP6) { - ipv6_addr_copy(&sin->sin6_addr, &skb->nh.ipv6h->saddr); + ipv6_addr_copy(&sin->sin6_addr, &ipv6_hdr(skb)->saddr); if (np->rxopt.all) datagram_recv_ctl(sk, msg, skb); if (ipv6_addr_type(&sin->sin6_addr) & IPV6_ADDR_LINKLOCAL) @@ -344,8 +349,7 @@ int ipv6_recv_error(struct sock *sk, str struct inet_sock *inet = inet_sk(sk); ipv6_addr_set(&sin->sin6_addr, 0, 0, - htonl(0xffff), - skb->nh.iph->saddr); + htonl(0xffff), ip_hdr(skb)->saddr); if (inet->cmsg_flags) ip_cmsg_recv(msg, skb); } @@ -381,33 +385,34 @@ int datagram_recv_ctl(struct sock *sk, s { struct ipv6_pinfo *np = inet6_sk(sk); struct inet6_skb_parm *opt = IP6CB(skb); + unsigned char *nh = skb_network_header(skb); if (np->rxopt.bits.rxinfo) { struct in6_pktinfo src_info; src_info.ipi6_ifindex = opt->iif; - ipv6_addr_copy(&src_info.ipi6_addr, &skb->nh.ipv6h->daddr); + ipv6_addr_copy(&src_info.ipi6_addr, &ipv6_hdr(skb)->daddr); put_cmsg(msg, SOL_IPV6, IPV6_PKTINFO, sizeof(src_info), &src_info); } if (np->rxopt.bits.rxhlim) { - int hlim = skb->nh.ipv6h->hop_limit; + int hlim = ipv6_hdr(skb)->hop_limit; put_cmsg(msg, SOL_IPV6, IPV6_HOPLIMIT, sizeof(hlim), &hlim); } if (np->rxopt.bits.rxtclass) { - int tclass = (ntohl(*(__be32 *)skb->nh.ipv6h) >> 20) & 0xff; + int tclass = (ntohl(*(__be32 *)ipv6_hdr(skb)) >> 20) & 0xff; put_cmsg(msg, SOL_IPV6, IPV6_TCLASS, sizeof(tclass), &tclass); } - if (np->rxopt.bits.rxflow && (*(__be32*)skb->nh.raw & IPV6_FLOWINFO_MASK)) { - __be32 flowinfo = *(__be32*)skb->nh.raw & IPV6_FLOWINFO_MASK; + if (np->rxopt.bits.rxflow && (*(__be32 *)nh & IPV6_FLOWINFO_MASK)) { + __be32 flowinfo = *(__be32 *)nh & IPV6_FLOWINFO_MASK; put_cmsg(msg, SOL_IPV6, IPV6_FLOWINFO, sizeof(flowinfo), &flowinfo); } /* HbH is allowed only once */ if 
(np->rxopt.bits.hopopts && opt->hop) { - u8 *ptr = skb->nh.raw + opt->hop; + u8 *ptr = nh + opt->hop; put_cmsg(msg, SOL_IPV6, IPV6_HOPOPTS, (ptr[1]+1)<<3, ptr); } @@ -423,11 +428,11 @@ int datagram_recv_ctl(struct sock *sk, s * IPV6_RECVDSTOPTS is more generic. --yoshfuji */ unsigned int off = sizeof(struct ipv6hdr); - u8 nexthdr = skb->nh.ipv6h->nexthdr; + u8 nexthdr = ipv6_hdr(skb)->nexthdr; while (off <= opt->lastopt) { unsigned len; - u8 *ptr = skb->nh.raw + off; + u8 *ptr = nh + off; switch(nexthdr) { case IPPROTO_DSTOPTS: @@ -461,27 +466,27 @@ int datagram_recv_ctl(struct sock *sk, s struct in6_pktinfo src_info; src_info.ipi6_ifindex = opt->iif; - ipv6_addr_copy(&src_info.ipi6_addr, &skb->nh.ipv6h->daddr); + ipv6_addr_copy(&src_info.ipi6_addr, &ipv6_hdr(skb)->daddr); put_cmsg(msg, SOL_IPV6, IPV6_2292PKTINFO, sizeof(src_info), &src_info); } if (np->rxopt.bits.rxohlim) { - int hlim = skb->nh.ipv6h->hop_limit; + int hlim = ipv6_hdr(skb)->hop_limit; put_cmsg(msg, SOL_IPV6, IPV6_2292HOPLIMIT, sizeof(hlim), &hlim); } if (np->rxopt.bits.ohopopts && opt->hop) { - u8 *ptr = skb->nh.raw + opt->hop; + u8 *ptr = nh + opt->hop; put_cmsg(msg, SOL_IPV6, IPV6_2292HOPOPTS, (ptr[1]+1)<<3, ptr); } if (np->rxopt.bits.odstopts && opt->dst0) { - u8 *ptr = skb->nh.raw + opt->dst0; + u8 *ptr = nh + opt->dst0; put_cmsg(msg, SOL_IPV6, IPV6_2292DSTOPTS, (ptr[1]+1)<<3, ptr); } if (np->rxopt.bits.osrcrt && opt->srcrt) { - struct ipv6_rt_hdr *rthdr = (struct ipv6_rt_hdr *)(skb->nh.raw + opt->srcrt); + struct ipv6_rt_hdr *rthdr = (struct ipv6_rt_hdr *)(nh + opt->srcrt); put_cmsg(msg, SOL_IPV6, IPV6_2292RTHDR, (rthdr->hdrlen+1) << 3, rthdr); } if (np->rxopt.bits.odstopts && opt->dst1) { - u8 *ptr = skb->nh.raw + opt->dst1; + u8 *ptr = nh + opt->dst1; put_cmsg(msg, SOL_IPV6, IPV6_2292DSTOPTS, (ptr[1]+1)<<3, ptr); } return 0; @@ -718,7 +723,7 @@ int datagram_send_ctl(struct msghdr *msg cmsg->cmsg_type); err = -EINVAL; break; - }; + } } exit_f: diff -puN net/ipv6/esp6.c~git-net net/ipv6/esp6.c --- a/net/ipv6/esp6.c~git-net +++ a/net/ipv6/esp6.c @@ -42,21 +42,19 @@ static int esp6_output(struct xfrm_state *x, struct sk_buff *skb) { int err; - int hdr_len; struct ipv6hdr *top_iph; struct ipv6_esp_hdr *esph; struct crypto_blkcipher *tfm; struct blkcipher_desc desc; - struct esp_data *esp; struct sk_buff *trailer; int blksize; int clen; int alen; int nfrags; - - esp = x->data; - hdr_len = skb->h.raw - skb->data + - sizeof(*esph) + esp->conf.ivlen; + u8 *tail; + struct esp_data *esp = x->data; + int hdr_len = (skb_transport_offset(skb) + + sizeof(*esph) + esp->conf.ivlen); /* Strip IP+ESP header. */ __skb_pull(skb, hdr_len); @@ -81,19 +79,20 @@ static int esp6_output(struct xfrm_state } /* Fill padding... 
*/ + tail = skb_tail_pointer(trailer); do { int i; for (i=0; i<clen-skb->len - 2; i++) - *(u8*)(trailer->tail + i) = i+1; + tail[i] = i + 1; } while (0); - *(u8*)(trailer->tail + clen-skb->len - 2) = (clen - skb->len)-2; + tail[clen-skb->len - 2] = (clen - skb->len) - 2; pskb_put(skb, trailer, clen - skb->len); top_iph = (struct ipv6hdr *)__skb_push(skb, hdr_len); - esph = (struct ipv6_esp_hdr *)skb->h.raw; + esph = (struct ipv6_esp_hdr *)skb_transport_header(skb); top_iph->payload_len = htons(skb->len + alen - sizeof(*top_iph)); - *(u8*)(trailer->tail - 1) = *skb->nh.raw; - *skb->nh.raw = IPPROTO_ESP; + *(skb_tail_pointer(trailer) - 1) = *skb_network_header(skb); + *skb_network_header(skb) = IPPROTO_ESP; esph->spi = x->id.spi; esph->seq_no = htonl(++x->replay.oseq); @@ -150,8 +149,7 @@ static int esp6_input(struct xfrm_state int blksize = ALIGN(crypto_blkcipher_blocksize(tfm), 4); int alen = esp->auth.icv_trunc_len; int elen = skb->len - sizeof(struct ipv6_esp_hdr) - esp->conf.ivlen - alen; - - int hdr_len = skb->h.raw - skb->nh.raw; + int hdr_len = skb_network_header_len(skb); int nfrags; int ret = 0; @@ -191,7 +189,7 @@ static int esp6_input(struct xfrm_state skb->ip_summed = CHECKSUM_NONE; esph = (struct ipv6_esp_hdr*)skb->data; - iph = skb->nh.ipv6h; + iph = ipv6_hdr(skb); /* Get ivec. This can be wrong, check against another impls. */ if (esp->conf.ivlen) @@ -231,28 +229,30 @@ static int esp6_input(struct xfrm_state ret = nexthdr[1]; } - skb->h.raw = __skb_pull(skb, sizeof(*esph) + esp->conf.ivlen) - hdr_len; - + __skb_pull(skb, sizeof(*esph) + esp->conf.ivlen); + skb_set_transport_header(skb, -hdr_len); out: return ret; } -static u32 esp6_get_max_size(struct xfrm_state *x, int mtu) +static u32 esp6_get_mtu(struct xfrm_state *x, int mtu) { struct esp_data *esp = x->data; u32 blksize = ALIGN(crypto_blkcipher_blocksize(esp->conf.tfm), 4); + u32 align = max_t(u32, blksize, esp->conf.padlen); + u32 rem; + + mtu -= x->props.header_len + esp->auth.icv_trunc_len; + rem = mtu & (align - 1); + mtu &= ~(align - 1); - if (x->props.mode == XFRM_MODE_TUNNEL) { - mtu = ALIGN(mtu + 2, blksize); - } else { - /* The worst case.
*/ + if (x->props.mode != XFRM_MODE_TUNNEL) { u32 padsize = ((blksize - 1) & 7) + 1; - mtu = ALIGN(mtu + 2, padsize) + blksize - padsize; + mtu -= blksize - padsize; + mtu += min_t(u32, blksize - padsize, rem); } - if (esp->conf.padlen) - mtu = ALIGN(mtu, esp->conf.padlen); - return mtu + x->props.header_len + esp->auth.icv_trunc_len; + return mtu - 2; } static void esp6_err(struct sk_buff *skb, struct inet6_skb_parm *opt, @@ -382,7 +382,7 @@ static struct xfrm_type esp6_type = .proto = IPPROTO_ESP, .init_state = esp6_init_state, .destructor = esp6_destroy, - .get_max_size = esp6_get_max_size, + .get_mtu = esp6_get_mtu, .input = esp6_input, .output = esp6_output, .hdr_offset = xfrm6_find_1stfragopt, diff -puN net/ipv6/exthdrs.c~git-net net/ipv6/exthdrs.c --- a/net/ipv6/exthdrs.c~git-net +++ a/net/ipv6/exthdrs.c @@ -50,13 +50,14 @@ int ipv6_find_tlv(struct sk_buff *skb, int offset, int type) { - int packet_len = skb->tail - skb->nh.raw; + const unsigned char *nh = skb_network_header(skb); + int packet_len = skb->tail - skb->network_header; struct ipv6_opt_hdr *hdr; int len; if (offset + 2 > packet_len) goto bad; - hdr = (struct ipv6_opt_hdr*)(skb->nh.raw + offset); + hdr = (struct ipv6_opt_hdr *)(nh + offset); len = ((hdr->hdrlen + 1) << 3); if (offset + len > packet_len) @@ -66,7 +67,7 @@ int ipv6_find_tlv(struct sk_buff *skb, i len -= 2; while (len > 0) { - int opttype = skb->nh.raw[offset]; + int opttype = nh[offset]; int optlen; if (opttype == type) @@ -77,7 +78,7 @@ int ipv6_find_tlv(struct sk_buff *skb, i optlen = 1; break; default: - optlen = skb->nh.raw[offset + 1] + 2; + optlen = nh[offset + 1] + 2; if (optlen > len) goto bad; break; @@ -113,7 +114,7 @@ static int ip6_tlvopt_unknown(struct sk_ { struct sk_buff *skb = *skbp; - switch ((skb->nh.raw[optoff] & 0xC0) >> 6) { + switch ((skb_network_header(skb)[optoff] & 0xC0) >> 6) { case 0: /* ignore */ return 1; @@ -124,12 +125,12 @@ static int ip6_tlvopt_unknown(struct sk_ /* Actually, it is redundant check. icmp_send will recheck in any case. */ - if (ipv6_addr_is_multicast(&skb->nh.ipv6h->daddr)) + if (ipv6_addr_is_multicast(&ipv6_hdr(skb)->daddr)) break; case 2: /* send ICMP PARM PROB regardless and drop packet */ icmpv6_param_prob(skb, ICMPV6_UNK_OPTION, optoff); return 0; - }; + } kfree_skb(skb); return 0; @@ -141,19 +142,20 @@ static int ip6_parse_tlv(struct tlvtype_ { struct sk_buff *skb = *skbp; struct tlvtype_proc *curr; - int off = skb->h.raw - skb->nh.raw; - int len = ((skb->h.raw[1]+1)<<3); + const unsigned char *nh = skb_network_header(skb); + int off = skb_network_header_len(skb); + int len = (skb_transport_header(skb)[1] + 1) << 3; - if ((skb->h.raw + len) - skb->data > skb_headlen(skb)) + if (skb_transport_offset(skb) + len > skb_headlen(skb)) goto bad; off += 2; len -= 2; while (len > 0) { - int optlen = skb->nh.raw[off+1]+2; + int optlen = nh[off + 1] + 2; - switch (skb->nh.raw[off]) { + switch (nh[off]) { case IPV6_TLV_PAD0: optlen = 1; break; @@ -165,7 +167,7 @@ static int ip6_parse_tlv(struct tlvtype_ if (optlen > len) goto bad; for (curr=procs; curr->type >= 0; curr++) { - if (curr->type == skb->nh.raw[off]) { + if (curr->type == nh[off]) { /* type specific length/alignment checks will be performed in the func(). 
*/ @@ -200,7 +202,7 @@ static int ipv6_dest_hao(struct sk_buff struct sk_buff *skb = *skbp; struct ipv6_destopt_hao *hao; struct inet6_skb_parm *opt = IP6CB(skb); - struct ipv6hdr *ipv6h = (struct ipv6hdr *)skb->nh.raw; + struct ipv6hdr *ipv6h = ipv6_hdr(skb); struct in6_addr tmp_addr; int ret; @@ -211,7 +213,7 @@ static int ipv6_dest_hao(struct sk_buff opt->dsthao = opt->dst1; opt->dst1 = 0; - hao = (struct ipv6_destopt_hao *)(skb->nh.raw + optoff); + hao = (struct ipv6_destopt_hao *)(skb_network_header(skb) + optoff); if (hao->length != 16) { LIMIT_NETDEBUG( @@ -244,8 +246,9 @@ static int ipv6_dest_hao(struct sk_buff /* update all variable using below by copied skbuff */ *skbp = skb = skb2; - hao = (struct ipv6_destopt_hao *)(skb2->nh.raw + optoff); - ipv6h = (struct ipv6hdr *)skb2->nh.raw; + hao = (struct ipv6_destopt_hao *)(skb_network_header(skb2) + + optoff); + ipv6h = ipv6_hdr(skb2); } if (skb->ip_summed == CHECKSUM_COMPLETE) @@ -255,7 +258,7 @@ static int ipv6_dest_hao(struct sk_buff ipv6_addr_copy(&ipv6h->saddr, &hao->addr); ipv6_addr_copy(&hao->addr, &tmp_addr); - if (skb->tstamp.off_sec == 0) + if (skb->tstamp.tv64 == 0) __net_timestamp(skb); return 1; @@ -285,16 +288,16 @@ static int ipv6_destopt_rcv(struct sk_bu #endif struct dst_entry *dst; - if (!pskb_may_pull(skb, (skb->h.raw-skb->data)+8) || - !pskb_may_pull(skb, (skb->h.raw-skb->data)+((skb->h.raw[1]+1)<<3))) { + if (!pskb_may_pull(skb, skb_transport_offset(skb) + 8) || + !pskb_may_pull(skb, (skb_transport_offset(skb) + + ((skb_transport_header(skb)[1] + 1) << 3)))) { IP6_INC_STATS_BH(ip6_dst_idev(skb->dst), IPSTATS_MIB_INHDRERRORS); kfree_skb(skb); return -1; } - opt->lastopt = skb->h.raw - skb->nh.raw; - opt->dst1 = skb->h.raw - skb->nh.raw; + opt->lastopt = opt->dst1 = skb_network_header_len(skb); #ifdef CONFIG_IPV6_MIP6 dstbuf = opt->dst1; #endif @@ -303,7 +306,7 @@ static int ipv6_destopt_rcv(struct sk_bu if (ip6_parse_tlv(tlvprocdestopt_lst, skbp)) { dst_release(dst); skb = *skbp; - skb->h.raw += ((skb->h.raw[1]+1)<<3); + skb->transport_header += (skb_transport_header(skb)[1] + 1) << 3; opt = IP6CB(skb); #ifdef CONFIG_IPV6_MIP6 opt->nhoff = dstbuf; @@ -367,17 +370,18 @@ static int ipv6_rthdr_rcv(struct sk_buff struct ipv6_rt_hdr *hdr; struct rt0_hdr *rthdr; - if (!pskb_may_pull(skb, (skb->h.raw-skb->data)+8) || - !pskb_may_pull(skb, (skb->h.raw-skb->data)+((skb->h.raw[1]+1)<<3))) { + if (!pskb_may_pull(skb, skb_transport_offset(skb) + 8) || + !pskb_may_pull(skb, (skb_transport_offset(skb) + + ((skb_transport_header(skb)[1] + 1) << 3)))) { IP6_INC_STATS_BH(ip6_dst_idev(skb->dst), IPSTATS_MIB_INHDRERRORS); kfree_skb(skb); return -1; } - hdr = (struct ipv6_rt_hdr *) skb->h.raw; + hdr = (struct ipv6_rt_hdr *)skb_transport_header(skb); - if (ipv6_addr_is_multicast(&skb->nh.ipv6h->daddr) || + if (ipv6_addr_is_multicast(&ipv6_hdr(skb)->daddr) || skb->pkt_type != PACKET_HOST) { IP6_INC_STATS_BH(ip6_dst_idev(skb->dst), IPSTATS_MIB_INADDRERRORS); @@ -405,12 +409,11 @@ looped_back: break; } - opt->lastopt = skb->h.raw - skb->nh.raw; - opt->srcrt = skb->h.raw - skb->nh.raw; - skb->h.raw += (hdr->hdrlen + 1) << 3; + opt->lastopt = opt->srcrt = skb_network_header_len(skb); + skb->transport_header += (hdr->hdrlen + 1) << 3; opt->dst0 = opt->dst1; opt->dst1 = 0; - opt->nhoff = (&hdr->nexthdr) - skb->nh.raw; + opt->nhoff = (&hdr->nexthdr) - skb_network_header(skb); return 1; } @@ -419,7 +422,9 @@ looped_back: if (hdr->hdrlen & 0x01) { IP6_INC_STATS_BH(ip6_dst_idev(skb->dst), IPSTATS_MIB_INHDRERRORS); - icmpv6_param_prob(skb, 
ICMPV6_HDR_FIELD, (&hdr->hdrlen) - skb->nh.raw); + icmpv6_param_prob(skb, ICMPV6_HDR_FIELD, + ((&hdr->hdrlen) - + skb_network_header(skb))); return -1; } break; @@ -437,7 +442,8 @@ looped_back: default: IP6_INC_STATS_BH(ip6_dst_idev(skb->dst), IPSTATS_MIB_INHDRERRORS); - icmpv6_param_prob(skb, ICMPV6_HDR_FIELD, (&hdr->type) - skb->nh.raw); + icmpv6_param_prob(skb, ICMPV6_HDR_FIELD, + (&hdr->type) - skb_network_header(skb)); return -1; } @@ -451,7 +457,9 @@ looped_back: if (hdr->segments_left > n) { IP6_INC_STATS_BH(ip6_dst_idev(skb->dst), IPSTATS_MIB_INHDRERRORS); - icmpv6_param_prob(skb, ICMPV6_HDR_FIELD, (&hdr->segments_left) - skb->nh.raw); + icmpv6_param_prob(skb, ICMPV6_HDR_FIELD, + ((&hdr->segments_left) - + skb_network_header(skb))); return -1; } @@ -470,7 +478,7 @@ looped_back: kfree_skb(skb); *skbp = skb = skb2; opt = IP6CB(skb2); - hdr = (struct ipv6_rt_hdr *) skb2->h.raw; + hdr = (struct ipv6_rt_hdr *)skb_transport_header(skb2); } if (skb->ip_summed == CHECKSUM_COMPLETE) @@ -486,7 +494,7 @@ looped_back: #ifdef CONFIG_IPV6_MIP6 case IPV6_SRCRT_TYPE_2: if (xfrm6_input_addr(skb, (xfrm_address_t *)addr, - (xfrm_address_t *)&skb->nh.ipv6h->saddr, + (xfrm_address_t *)&ipv6_hdr(skb)->saddr, IPPROTO_ROUTING) < 0) { IP6_INC_STATS_BH(ip6_dst_idev(skb->dst), IPSTATS_MIB_INADDRERRORS); @@ -513,19 +521,19 @@ looped_back: } ipv6_addr_copy(&daddr, addr); - ipv6_addr_copy(addr, &skb->nh.ipv6h->daddr); - ipv6_addr_copy(&skb->nh.ipv6h->daddr, &daddr); + ipv6_addr_copy(addr, &ipv6_hdr(skb)->daddr); + ipv6_addr_copy(&ipv6_hdr(skb)->daddr, &daddr); dst_release(xchg(&skb->dst, NULL)); ip6_route_input(skb); if (skb->dst->error) { - skb_push(skb, skb->data - skb->nh.raw); + skb_push(skb, skb->data - skb_network_header(skb)); dst_input(skb); return -1; } if (skb->dst->dev->flags&IFF_LOOPBACK) { - if (skb->nh.ipv6h->hop_limit <= 1) { + if (ipv6_hdr(skb)->hop_limit <= 1) { IP6_INC_STATS_BH(ip6_dst_idev(skb->dst), IPSTATS_MIB_INHDRERRORS); icmpv6_send(skb, ICMPV6_TIME_EXCEED, ICMPV6_EXC_HOPLIMIT, @@ -533,11 +541,11 @@ looped_back: kfree_skb(skb); return -1; } - skb->nh.ipv6h->hop_limit--; + ipv6_hdr(skb)->hop_limit--; goto looped_back; } - skb_push(skb, skb->data - skb->nh.raw); + skb_push(skb, skb->data - skb_network_header(skb)); dst_input(skb); return -1; } @@ -628,13 +636,14 @@ EXPORT_SYMBOL_GPL(ipv6_invert_rthdr); static int ipv6_hop_ra(struct sk_buff **skbp, int optoff) { struct sk_buff *skb = *skbp; + const unsigned char *nh = skb_network_header(skb); - if (skb->nh.raw[optoff+1] == 2) { + if (nh[optoff + 1] == 2) { IP6CB(skb)->ra = optoff; return 1; } LIMIT_NETDEBUG(KERN_DEBUG "ipv6_hop_ra: wrong RA length %d\n", - skb->nh.raw[optoff+1]); + nh[optoff + 1]); kfree_skb(skb); return 0; } @@ -644,23 +653,24 @@ static int ipv6_hop_ra(struct sk_buff ** static int ipv6_hop_jumbo(struct sk_buff **skbp, int optoff) { struct sk_buff *skb = *skbp; + const unsigned char *nh = skb_network_header(skb); u32 pkt_len; - if (skb->nh.raw[optoff+1] != 4 || (optoff&3) != 2) { + if (nh[optoff + 1] != 4 || (optoff & 3) != 2) { LIMIT_NETDEBUG(KERN_DEBUG "ipv6_hop_jumbo: wrong jumbo opt length/alignment %d\n", - skb->nh.raw[optoff+1]); + nh[optoff+1]); IP6_INC_STATS_BH(ip6_dst_idev(skb->dst), IPSTATS_MIB_INHDRERRORS); goto drop; } - pkt_len = ntohl(*(__be32*)(skb->nh.raw+optoff+2)); + pkt_len = ntohl(*(__be32 *)(nh + optoff + 2)); if (pkt_len <= IPV6_MAXPLEN) { IP6_INC_STATS_BH(ip6_dst_idev(skb->dst), IPSTATS_MIB_INHDRERRORS); icmpv6_param_prob(skb, ICMPV6_HDR_FIELD, optoff+2); return 0; } - if (skb->nh.ipv6h->payload_len) 
{ + if (ipv6_hdr(skb)->payload_len) { IP6_INC_STATS_BH(ip6_dst_idev(skb->dst), IPSTATS_MIB_INHDRERRORS); icmpv6_param_prob(skb, ICMPV6_HDR_FIELD, optoff); return 0; @@ -699,13 +709,14 @@ int ipv6_parse_hopopts(struct sk_buff ** struct inet6_skb_parm *opt = IP6CB(skb); /* - * skb->nh.raw is equal to skb->data, and - * skb->h.raw - skb->nh.raw is always equal to + * skb_network_header(skb) is equal to skb->data, and + * skb_network_header_len(skb) is always equal to * sizeof(struct ipv6hdr) by definition of * hop-by-hop options. */ if (!pskb_may_pull(skb, sizeof(struct ipv6hdr) + 8) || - !pskb_may_pull(skb, sizeof(struct ipv6hdr) + ((skb->h.raw[1] + 1) << 3))) { + !pskb_may_pull(skb, (sizeof(struct ipv6hdr) + + ((skb_transport_header(skb)[1] + 1) << 3)))) { kfree_skb(skb); return -1; } @@ -713,7 +724,7 @@ int ipv6_parse_hopopts(struct sk_buff ** opt->hop = sizeof(struct ipv6hdr); if (ip6_parse_tlv(tlvprochopopt_lst, skbp)) { skb = *skbp; - skb->h.raw += (skb->h.raw[1]+1)<<3; + skb->transport_header += (skb_transport_header(skb)[1] + 1) << 3; opt = IP6CB(skb); opt->nhoff = sizeof(struct ipv6hdr); return 1; @@ -782,6 +793,8 @@ void ipv6_push_nfrag_opts(struct sk_buff ipv6_push_exthdr(skb, proto, NEXTHDR_HOP, opt->hopopt); } +EXPORT_SYMBOL(ipv6_push_nfrag_opts); + void ipv6_push_frag_opts(struct sk_buff *skb, struct ipv6_txoptions *opt, u8 *proto) { if (opt->dst1opt) diff -puN net/ipv6/fib6_rules.c~git-net net/ipv6/fib6_rules.c --- a/net/ipv6/fib6_rules.c~git-net +++ a/net/ipv6/fib6_rules.c @@ -17,6 +17,7 @@ #include #include +#include #include #include @@ -95,8 +96,27 @@ static int fib6_rule_action(struct fib_r if (table) rt = lookup(table, flp, flags); - if (rt != &ip6_null_entry) + if (rt != &ip6_null_entry) { + struct fib6_rule *r = (struct fib6_rule *)rule; + + /* + * If we need to find a source address for this traffic, + * we check the result if it meets requirement of the rule. + */ + if ((rule->flags & FIB_RULE_FIND_SADDR) && + r->src.plen && !(flags & RT6_LOOKUP_F_HAS_SADDR)) { + struct in6_addr saddr; + if (ipv6_get_saddr(&rt->u.dst, &flp->fl6_dst, + &saddr)) + goto again; + if (!ipv6_prefix_equal(&saddr, &r->src.addr, + r->src.plen)) + goto again; + ipv6_addr_copy(&flp->fl6_src, &saddr); + } goto out; + } +again: dst_release(&rt->u.dst); rt = NULL; goto out; @@ -117,9 +137,17 @@ static int fib6_rule_match(struct fib_ru !ipv6_prefix_equal(&fl->fl6_dst, &r->dst.addr, r->dst.plen)) return 0; + /* + * If FIB_RULE_FIND_SADDR is set and we do not have a + * source address for the traffic, we defer check for + * source address. + */ if (r->src.plen) { - if (!(flags & RT6_LOOKUP_F_HAS_SADDR) || - !ipv6_prefix_equal(&fl->fl6_src, &r->src.addr, r->src.plen)) + if (flags & RT6_LOOKUP_F_HAS_SADDR) { + if (!ipv6_prefix_equal(&fl->fl6_src, &r->src.addr, + r->src.plen)) + return 0; + } else if (!(r->common.flags & FIB_RULE_FIND_SADDR)) return 0; } @@ -216,11 +244,6 @@ nla_put_failure: return -ENOBUFS; } -int fib6_rules_dump(struct sk_buff *skb, struct netlink_callback *cb) -{ - return fib_rules_dump(skb, cb, AF_INET6); -} - static u32 fib6_rule_default_pref(void) { return 0x3FFF; diff -puN net/ipv6/icmp.c~git-net net/ipv6/icmp.c --- a/net/ipv6/icmp.c~git-net +++ a/net/ipv6/icmp.c @@ -68,6 +68,7 @@ #include DEFINE_SNMP_STAT(struct icmpv6_mib, icmpv6_statistics) __read_mostly; +EXPORT_SYMBOL(icmpv6_statistics); /* * The ICMP socket(s). 
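Note on the fib6_rules.c hunk above: with FIB_RULE_FIND_SADDR, a rule whose source prefix cannot be checked yet (no source address in the flow) is no longer rejected in fib6_rule_match(); the check is deferred to fib6_rule_action(), where a candidate source address is derived from the looked-up route and, on mismatch, the next rule is tried. A rough standalone sketch of that two-phase flow (toy types throughout; pick_saddr() is a stand-in for ipv6_get_saddr()):

#include <stdbool.h>
#include <stdio.h>

struct rule { bool has_src_prefix, find_saddr; int src_prefix; };
struct flow { bool has_saddr; int saddr, daddr; };

static bool prefix_match(int addr, int prefix) { return addr == prefix; } /* toy check */
static int  pick_saddr(int daddr)              { return daddr ^ 1; }      /* stand-in for ipv6_get_saddr() */

/* match phase: defer the source check when no source address is known yet */
static bool rule_match(const struct rule *r, const struct flow *fl)
{
	if (!r->has_src_prefix)
		return true;
	if (fl->has_saddr)
		return prefix_match(fl->saddr, r->src_prefix);
	return r->find_saddr;   /* defer to the action phase */
}

/* action phase: with a route in hand, resolve a saddr and re-check */
static bool rule_action(const struct rule *r, struct flow *fl)
{
	if (r->has_src_prefix && r->find_saddr && !fl->has_saddr) {
		int saddr = pick_saddr(fl->daddr);
		if (!prefix_match(saddr, r->src_prefix))
			return false;   /* try the next rule ("goto again" in the patch) */
		fl->saddr = saddr;
		fl->has_saddr = true;
	}
	return true;
}

int main(void)
{
	struct rule r = { true, true, 7 };
	struct flow fl = { false, 0, 6 };

	printf("match=%d action=%d\n", rule_match(&r, &fl), rule_action(&r, &fl));
	return 0;
}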
This is the most convenient way to flow control @@ -128,9 +129,9 @@ void icmpv6_param_prob(struct sk_buff *s static int is_ineligible(struct sk_buff *skb) { - int ptr = (u8*)(skb->nh.ipv6h+1) - skb->data; + int ptr = (u8 *)(ipv6_hdr(skb) + 1) - skb->data; int len = skb->len - ptr; - __u8 nexthdr = skb->nh.ipv6h->nexthdr; + __u8 nexthdr = ipv6_hdr(skb)->nexthdr; if (len < 0) return 1; @@ -205,7 +206,7 @@ static __inline__ int opt_unrec(struct s { u8 _optval, *op; - offset += skb->nh.raw - skb->data; + offset += skb_network_offset(skb); op = skb_header_pointer(skb, offset, sizeof(_optval), &_optval); if (op == NULL) return 1; @@ -221,7 +222,7 @@ static int icmpv6_push_pending_frames(st if ((skb = skb_peek(&sk->sk_write_queue)) == NULL) goto out; - icmp6h = (struct icmp6hdr*) skb->h.raw; + icmp6h = icmp6_hdr(skb); memcpy(icmp6h, thdr, sizeof(struct icmp6hdr)); icmp6h->icmp6_cksum = 0; @@ -274,7 +275,7 @@ static int icmpv6_getfrag(void *from, ch #ifdef CONFIG_IPV6_MIP6 static void mip6_addr_swap(struct sk_buff *skb) { - struct ipv6hdr *iph = skb->nh.ipv6h; + struct ipv6hdr *iph = ipv6_hdr(skb); struct inet6_skb_parm *opt = IP6CB(skb); struct ipv6_destopt_hao *hao; struct in6_addr tmp; @@ -283,7 +284,8 @@ static void mip6_addr_swap(struct sk_buf if (opt->dsthao) { off = ipv6_find_tlv(skb, opt->dsthao, IPV6_TLV_HAO); if (likely(off >= 0)) { - hao = (struct ipv6_destopt_hao *)(skb->nh.raw + off); + hao = (struct ipv6_destopt_hao *) + (skb_network_header(skb) + off); ipv6_addr_copy(&tmp, &iph->saddr); ipv6_addr_copy(&iph->saddr, &hao->addr); ipv6_addr_copy(&hao->addr, &tmp); @@ -301,7 +303,7 @@ void icmpv6_send(struct sk_buff *skb, in struct net_device *dev) { struct inet6_dev *idev = NULL; - struct ipv6hdr *hdr = skb->nh.ipv6h; + struct ipv6hdr *hdr = ipv6_hdr(skb); struct sock *sk; struct ipv6_pinfo *np; struct in6_addr *saddr = NULL; @@ -315,7 +317,8 @@ void icmpv6_send(struct sk_buff *skb, in int hlimit, tclass; int err = 0; - if ((u8*)hdr < skb->head || (u8*)(hdr+1) > skb->tail) + if ((u8 *)hdr < skb->head || + (skb->network_header + sizeof(*hdr)) > skb->tail) return; /* @@ -430,7 +433,7 @@ void icmpv6_send(struct sk_buff *skb, in tclass = 0; msg.skb = skb; - msg.offset = skb->nh.raw - skb->data; + msg.offset = skb_network_offset(skb); msg.type = type; len = skb->len - msg.offset; @@ -466,13 +469,15 @@ out: icmpv6_xmit_unlock(); } +EXPORT_SYMBOL(icmpv6_send); + static void icmpv6_echo_reply(struct sk_buff *skb) { struct sock *sk; struct inet6_dev *idev; struct ipv6_pinfo *np; struct in6_addr *saddr = NULL; - struct icmp6hdr *icmph = (struct icmp6hdr *) skb->h.raw; + struct icmp6hdr *icmph = icmp6_hdr(skb); struct icmp6hdr tmp_hdr; struct flowi fl; struct icmpv6_msg msg; @@ -481,7 +486,7 @@ static void icmpv6_echo_reply(struct sk_ int hlimit; int tclass; - saddr = &skb->nh.ipv6h->daddr; + saddr = &ipv6_hdr(skb)->daddr; if (!ipv6_unicast_destination(skb)) saddr = NULL; @@ -491,7 +496,7 @@ static void icmpv6_echo_reply(struct sk_ memset(&fl, 0, sizeof(fl)); fl.proto = IPPROTO_ICMPV6; - ipv6_addr_copy(&fl.fl6_dst, &skb->nh.ipv6h->saddr); + ipv6_addr_copy(&fl.fl6_dst, &ipv6_hdr(skb)->saddr); if (saddr) ipv6_addr_copy(&fl.fl6_src, saddr); fl.oif = skb->dev->ifindex; @@ -579,8 +584,8 @@ static void icmpv6_notify(struct sk_buff if (!pskb_may_pull(skb, inner_offset+8)) return; - saddr = &skb->nh.ipv6h->saddr; - daddr = &skb->nh.ipv6h->daddr; + saddr = &ipv6_hdr(skb)->saddr; + daddr = &ipv6_hdr(skb)->daddr; /* BUGGG_FUTURE: we should try to parse exthdrs in this packet. 
Without this we will not able f.e. to make source routed @@ -624,8 +629,8 @@ static int icmpv6_rcv(struct sk_buff **p ICMP6_INC_STATS_BH(idev, ICMP6_MIB_INMSGS); - saddr = &skb->nh.ipv6h->saddr; - daddr = &skb->nh.ipv6h->daddr; + saddr = &ipv6_hdr(skb)->saddr; + daddr = &ipv6_hdr(skb)->daddr; /* Perform checksum. */ switch (skb->ip_summed) { @@ -647,7 +652,7 @@ static int icmpv6_rcv(struct sk_buff **p if (!pskb_pull(skb, sizeof(struct icmp6hdr))) goto discard_it; - hdr = (struct icmp6hdr *) skb->h.raw; + hdr = icmp6_hdr(skb); type = hdr->icmp6_type; @@ -673,7 +678,7 @@ static int icmpv6_rcv(struct sk_buff **p */ if (!pskb_may_pull(skb, sizeof(struct ipv6hdr))) goto discard_it; - hdr = (struct icmp6hdr *) skb->h.raw; + hdr = icmp6_hdr(skb); orig_hdr = (struct ipv6hdr *) (hdr + 1); rt6_pmtu_discovery(&orig_hdr->daddr, &orig_hdr->saddr, dev, ntohl(hdr->icmp6_mtu)); @@ -727,7 +732,8 @@ static int icmpv6_rcv(struct sk_buff **p */ icmpv6_notify(skb, type, hdr->icmp6_code, hdr->icmp6_mtu); - }; + } + kfree_skb(skb); return 0; @@ -860,11 +866,13 @@ int icmpv6_err_convert(int type, int cod case ICMPV6_TIME_EXCEED: *err = EHOSTUNREACH; break; - }; + } return fatal; } +EXPORT_SYMBOL(icmpv6_err_convert); + #ifdef CONFIG_SYSCTL ctl_table ipv6_icmp_table[] = { { diff -puN net/ipv6/ip6_fib.c~git-net net/ipv6/ip6_fib.c --- a/net/ipv6/ip6_fib.c~git-net +++ a/net/ipv6/ip6_fib.c @@ -359,7 +359,7 @@ end: return res; } -int inet6_dump_fib(struct sk_buff *skb, struct netlink_callback *cb) +static int inet6_dump_fib(struct sk_buff *skb, struct netlink_callback *cb) { unsigned int h, s_h; unsigned int e = 0, s_e; @@ -1486,6 +1486,8 @@ void __init fib6_init(void) NULL, NULL); fib6_tables_init(); + + __rtnl_register(PF_INET6, RTM_GETROUTE, NULL, inet6_dump_fib); } void fib6_gc_cleanup(void) diff -puN net/ipv6/ip6_input.c~git-net net/ipv6/ip6_input.c --- a/net/ipv6/ip6_input.c~git-net +++ a/net/ipv6/ip6_input.c @@ -96,12 +96,12 @@ int ipv6_rcv(struct sk_buff *skb, struct if (unlikely(!pskb_may_pull(skb, sizeof(*hdr)))) goto err; - hdr = skb->nh.ipv6h; + hdr = ipv6_hdr(skb); if (hdr->version != 6) goto err; - skb->h.raw = (u8 *)(hdr + 1); + skb->transport_header = skb->network_header + sizeof(*hdr); IP6CB(skb)->nhoff = offsetof(struct ipv6hdr, nexthdr); pkt_len = ntohs(hdr->payload_len); @@ -116,7 +116,7 @@ int ipv6_rcv(struct sk_buff *skb, struct IP6_INC_STATS_BH(idev, IPSTATS_MIB_INHDRERRORS); goto drop; } - hdr = skb->nh.ipv6h; + hdr = ipv6_hdr(skb); } if (hdr->nexthdr == NEXTHDR_HOP) { @@ -160,10 +160,10 @@ static inline int ip6_input_finish(struc rcu_read_lock(); resubmit: idev = ip6_dst_idev(skb->dst); - if (!pskb_pull(skb, skb->h.raw - skb->data)) + if (!pskb_pull(skb, skb_transport_offset(skb))) goto discard; nhoff = IP6CB(skb)->nhoff; - nexthdr = skb->nh.raw[nhoff]; + nexthdr = skb_network_header(skb)[nhoff]; raw_sk = sk_head(&raw_v6_htable[nexthdr & (MAX_INET_PROTOS - 1)]); if (raw_sk && !ipv6_raw_deliver(skb, nexthdr)) @@ -181,9 +181,9 @@ resubmit: indefinitely. 
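Note on the conversion running through these IPv6 hunks: the old unioned pointers skb->nh.raw, skb->nh.ipv6h and skb->h.raw are replaced by accessor helpers, and the recurring offset arithmetic moves into helpers as well. The mapping, plus a toy model that makes the arithmetic explicit (toy_skb is illustrative, not the real struct sk_buff layout):

#include <assert.h>

/* Toy stand-in for the three sk_buff pointers involved (layout illustrative). */
struct toy_skb { unsigned char *data, *network_header, *transport_header; };

/*
 * Mapping used throughout these hunks:
 *
 *   old expression               new helper
 *   skb->nh.raw                  skb_network_header(skb)
 *   skb->nh.ipv6h                ipv6_hdr(skb)
 *   skb->h.raw                   skb_transport_header(skb)
 *   skb->nh.raw - skb->data      skb_network_offset(skb)
 *   skb->h.raw  - skb->data      skb_transport_offset(skb)
 *   skb->h.raw  - skb->nh.raw    skb_network_header_len(skb)
 *   skb->nh.raw  = skb->data     skb_reset_network_header(skb)
 *   skb->h.raw   = skb->data     skb_reset_transport_header(skb)
 *   skb->mac.raw = skb->data     skb_reset_mac_header(skb)
 */
static long toy_network_header_len(const struct toy_skb *s)
{
	return s->transport_header - s->network_header;
}

int main(void)
{
	unsigned char pkt[94];
	/* e.g. 14-byte link header followed by a 40-byte IPv6 header */
	struct toy_skb s = { pkt, pkt + 14, pkt + 14 + 40 };

	assert(toy_network_header_len(&s) == 40);
	return 0;
}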
*/ nf_reset(skb); - skb_postpull_rcsum(skb, skb->nh.raw, - skb->h.raw - skb->nh.raw); - hdr = skb->nh.ipv6h; + skb_postpull_rcsum(skb, skb_network_header(skb), + skb_network_header_len(skb)); + hdr = ipv6_hdr(skb); if (ipv6_addr_is_multicast(&hdr->daddr) && !ipv6_chk_mcast_addr(skb->dev, &hdr->daddr, &hdr->saddr) && @@ -234,7 +234,7 @@ int ip6_mc_input(struct sk_buff *skb) IP6_INC_STATS_BH(ip6_dst_idev(skb->dst), IPSTATS_MIB_INMCASTPKTS); - hdr = skb->nh.ipv6h; + hdr = ipv6_hdr(skb); deliver = likely(!(skb->dev->flags & (IFF_PROMISC|IFF_ALLMULTI))) || ipv6_chk_mcast_addr(skb->dev, &hdr->daddr, NULL); diff -puN net/ipv6/ip6_output.c~git-net net/ipv6/ip6_output.c --- a/net/ipv6/ip6_output.c~git-net +++ a/net/ipv6/ip6_output.c @@ -88,8 +88,8 @@ static inline int ip6_output_finish(stru /* dev_loopback_xmit for use with netfilter. */ static int ip6_dev_loopback_xmit(struct sk_buff *newskb) { - newskb->mac.raw = newskb->data; - __skb_pull(newskb, newskb->nh.raw - newskb->data); + skb_reset_mac_header(newskb); + __skb_pull(newskb, skb_network_offset(newskb)); newskb->pkt_type = PACKET_LOOPBACK; newskb->ip_summed = CHECKSUM_UNNECESSARY; BUG_TRAP(newskb->dst); @@ -107,13 +107,13 @@ static int ip6_output2(struct sk_buff *s skb->protocol = htons(ETH_P_IPV6); skb->dev = dev; - if (ipv6_addr_is_multicast(&skb->nh.ipv6h->daddr)) { + if (ipv6_addr_is_multicast(&ipv6_hdr(skb)->daddr)) { struct ipv6_pinfo* np = skb->sk ? inet6_sk(skb->sk) : NULL; struct inet6_dev *idev = ip6_dst_idev(skb->dst); if (!(dev->flags & IFF_LOOPBACK) && (!np || np->mc_loop) && - ipv6_chk_mcast_addr(dev, &skb->nh.ipv6h->daddr, - &skb->nh.ipv6h->saddr)) { + ipv6_chk_mcast_addr(dev, &ipv6_hdr(skb)->daddr, + &ipv6_hdr(skb)->saddr)) { struct sk_buff *newskb = skb_clone(skb, GFP_ATOMIC); /* Do not check for IFF_ALLMULTI; multicast routing @@ -124,7 +124,7 @@ static int ip6_output2(struct sk_buff *s newskb->dev, ip6_dev_loopback_xmit); - if (skb->nh.ipv6h->hop_limit == 0) { + if (ipv6_hdr(skb)->hop_limit == 0) { IP6_INC_STATS(idev, IPSTATS_MIB_OUTDISCARDS); kfree_skb(skb); return 0; @@ -137,9 +137,17 @@ static int ip6_output2(struct sk_buff *s return NF_HOOK(PF_INET6, NF_IP6_POST_ROUTING, skb,NULL, skb->dev,ip6_output_finish); } +static inline int ip6_skb_dst_mtu(struct sk_buff *skb) +{ + struct ipv6_pinfo *np = skb->sk ? inet6_sk(skb->sk) : NULL; + + return (np && np->pmtudisc == IPV6_PMTUDISC_PROBE) ? + skb->dst->dev->mtu : dst_mtu(skb->dst); +} + int ip6_output(struct sk_buff *skb) { - if ((skb->len > dst_mtu(skb->dst) && !skb_is_gso(skb)) || + if ((skb->len > ip6_skb_dst_mtu(skb) && !skb_is_gso(skb)) || dst_allfrag(skb->dst)) return ip6_fragment(skb, ip6_output2); else @@ -191,7 +199,9 @@ int ip6_xmit(struct sock *sk, struct sk_ ipv6_push_nfrag_opts(skb, opt, &proto, &first_hop); } - hdr = skb->nh.ipv6h = (struct ipv6hdr*)skb_push(skb, sizeof(struct ipv6hdr)); + skb_push(skb, sizeof(struct ipv6hdr)); + skb_reset_network_header(skb); + hdr = ipv6_hdr(skb); /* * Fill in the IPv6 header @@ -239,6 +249,8 @@ int ip6_xmit(struct sock *sk, struct sk_ return -EMSGSIZE; } +EXPORT_SYMBOL(ip6_xmit); + /* * To avoid extra problems ND packets are send through this * routine. 
It's code duplication but I really want to avoid @@ -259,8 +271,9 @@ int ip6_nd_hdr(struct sock *sk, struct s totlen = len + sizeof(struct ipv6hdr); - hdr = (struct ipv6hdr *) skb_put(skb, sizeof(struct ipv6hdr)); - skb->nh.ipv6h = hdr; + skb_reset_network_header(skb); + skb_put(skb, sizeof(struct ipv6hdr)); + hdr = ipv6_hdr(skb); *(__be32*)hdr = htonl(0x60000000); @@ -305,7 +318,7 @@ static int ip6_call_ra_chain(struct sk_b static int ip6_forward_proxy_check(struct sk_buff *skb) { - struct ipv6hdr *hdr = skb->nh.ipv6h; + struct ipv6hdr *hdr = ipv6_hdr(skb); u8 nexthdr = hdr->nexthdr; int offset; @@ -319,10 +332,11 @@ static int ip6_forward_proxy_check(struc if (nexthdr == IPPROTO_ICMPV6) { struct icmp6hdr *icmp6; - if (!pskb_may_pull(skb, skb->nh.raw + offset + 1 - skb->data)) + if (!pskb_may_pull(skb, (skb_network_header(skb) + + offset + 1 - skb->data))) return 0; - icmp6 = (struct icmp6hdr *)(skb->nh.raw + offset); + icmp6 = (struct icmp6hdr *)(skb_network_header(skb) + offset); switch (icmp6->icmp6_type) { case NDISC_ROUTER_SOLICITATION: @@ -361,7 +375,7 @@ static inline int ip6_forward_finish(str int ip6_forward(struct sk_buff *skb) { struct dst_entry *dst = skb->dst; - struct ipv6hdr *hdr = skb->nh.ipv6h; + struct ipv6hdr *hdr = ipv6_hdr(skb); struct inet6_skb_parm *opt = IP6CB(skb); if (ipv6_devconf.forwarding == 0) @@ -372,7 +386,7 @@ int ip6_forward(struct sk_buff *skb) goto drop; } - skb->ip_summed = CHECKSUM_NONE; + skb_forward_csum(skb); /* * We DO NOT make any processing on @@ -388,7 +402,7 @@ int ip6_forward(struct sk_buff *skb) * that different fragments will go along one path. --ANK */ if (opt->ra) { - u8 *ptr = skb->nh.raw + opt->ra; + u8 *ptr = skb_network_header(skb) + opt->ra; if (ip6_call_ra_chain(skb, (ptr[2]<<8) + ptr[3])) return 0; } @@ -470,7 +484,7 @@ int ip6_forward(struct sk_buff *skb) goto drop; } - hdr = skb->nh.ipv6h; + hdr = ipv6_hdr(skb); /* Mangling hops number delayed to point after skb COW */ @@ -499,33 +513,18 @@ static void ip6_copy_metadata(struct sk_ #ifdef CONFIG_NET_SCHED to->tc_index = from->tc_index; #endif -#ifdef CONFIG_NETFILTER - /* Connection association is same as pre-frag packet */ - nf_conntrack_put(to->nfct); - to->nfct = from->nfct; - nf_conntrack_get(to->nfct); - to->nfctinfo = from->nfctinfo; -#if defined(CONFIG_NF_CONNTRACK) || defined(CONFIG_NF_CONNTRACK_MODULE) - nf_conntrack_put_reasm(to->nfct_reasm); - to->nfct_reasm = from->nfct_reasm; - nf_conntrack_get_reasm(to->nfct_reasm); -#endif -#ifdef CONFIG_BRIDGE_NETFILTER - nf_bridge_put(to->nf_bridge); - to->nf_bridge = from->nf_bridge; - nf_bridge_get(to->nf_bridge); -#endif -#endif + nf_copy(to, from); skb_copy_secmark(to, from); } int ip6_find_1stfragopt(struct sk_buff *skb, u8 **nexthdr) { u16 offset = sizeof(struct ipv6hdr); - struct ipv6_opt_hdr *exthdr = (struct ipv6_opt_hdr*)(skb->nh.ipv6h + 1); - unsigned int packet_len = skb->tail - skb->nh.raw; + struct ipv6_opt_hdr *exthdr = + (struct ipv6_opt_hdr *)(ipv6_hdr(skb) + 1); + unsigned int packet_len = skb->tail - skb->network_header; int found_rhdr = 0; - *nexthdr = &skb->nh.ipv6h->nexthdr; + *nexthdr = &ipv6_hdr(skb)->nexthdr; while (offset + 1 <= packet_len) { @@ -550,7 +549,8 @@ int ip6_find_1stfragopt(struct sk_buff * offset += ipv6_optlen(exthdr); *nexthdr = &exthdr->nexthdr; - exthdr = (struct ipv6_opt_hdr*)(skb->nh.raw + offset); + exthdr = (struct ipv6_opt_hdr *)(skb_network_header(skb) + + offset); } return offset; @@ -574,7 +574,20 @@ static int ip6_fragment(struct sk_buff * hlen = ip6_find_1stfragopt(skb, 
&prevhdr); nexthdr = *prevhdr; - mtu = dst_mtu(&rt->u.dst); + mtu = ip6_skb_dst_mtu(skb); + + /* We must not fragment if the socket is set to force MTU discovery + * or if the skb it not generated by a local socket. (This last + * check should be redundant, but it's free.) + */ + if (!np || np->pmtudisc >= IPV6_PMTUDISC_DO) { + skb->dev = skb->dst->dev; + icmpv6_send(skb, ICMPV6_PKT_TOOBIG, 0, mtu, skb->dev); + IP6_INC_STATS(ip6_dst_idev(skb->dst), IPSTATS_MIB_FRAGFAILS); + kfree_skb(skb); + return -EMSGSIZE; + } + if (np && np->frag_size < mtu) { if (np->frag_size) mtu = np->frag_size; @@ -616,7 +629,7 @@ static int ip6_fragment(struct sk_buff * /* BUILD HEADER */ *prevhdr = NEXTHDR_FRAGMENT; - tmp_hdr = kmemdup(skb->nh.raw, hlen, GFP_ATOMIC); + tmp_hdr = kmemdup(skb_network_header(skb), hlen, GFP_ATOMIC); if (!tmp_hdr) { IP6_INC_STATS(ip6_dst_idev(skb->dst), IPSTATS_MIB_FRAGFAILS); return -ENOMEM; @@ -624,8 +637,9 @@ static int ip6_fragment(struct sk_buff * __skb_pull(skb, hlen); fh = (struct frag_hdr*)__skb_push(skb, sizeof(struct frag_hdr)); - skb->nh.raw = __skb_push(skb, hlen); - memcpy(skb->nh.raw, tmp_hdr, hlen); + __skb_push(skb, hlen); + skb_reset_network_header(skb); + memcpy(skb_network_header(skb), tmp_hdr, hlen); ipv6_select_ident(skb, fh); fh->nexthdr = nexthdr; @@ -636,7 +650,8 @@ static int ip6_fragment(struct sk_buff * first_len = skb_pagelen(skb); skb->data_len = first_len - skb_headlen(skb); skb->len = first_len; - skb->nh.ipv6h->payload_len = htons(first_len - sizeof(struct ipv6hdr)); + ipv6_hdr(skb)->payload_len = htons(first_len - + sizeof(struct ipv6hdr)); dst_hold(&rt->u.dst); @@ -645,10 +660,12 @@ static int ip6_fragment(struct sk_buff * * before previous one went down. */ if (frag) { frag->ip_summed = CHECKSUM_NONE; - frag->h.raw = frag->data; + skb_reset_transport_header(frag); fh = (struct frag_hdr*)__skb_push(frag, sizeof(struct frag_hdr)); - frag->nh.raw = __skb_push(frag, hlen); - memcpy(frag->nh.raw, tmp_hdr, hlen); + __skb_push(frag, hlen); + skb_reset_network_header(frag); + memcpy(skb_network_header(frag), tmp_hdr, + hlen); offset += skb->len - hlen - sizeof(struct frag_hdr); fh->nexthdr = nexthdr; fh->reserved = 0; @@ -656,7 +673,9 @@ static int ip6_fragment(struct sk_buff * if (frag->next != NULL) fh->frag_off |= htons(IP6_MF); fh->identification = frag_id; - frag->nh.ipv6h->payload_len = htons(frag->len - sizeof(struct ipv6hdr)); + ipv6_hdr(frag)->payload_len = + htons(frag->len - + sizeof(struct ipv6hdr)); ip6_copy_metadata(frag, skb); } @@ -733,9 +752,10 @@ slow_path: ip6_copy_metadata(frag, skb); skb_reserve(frag, LL_RESERVED_SPACE(rt->u.dst.dev)); skb_put(frag, len + hlen + sizeof(struct frag_hdr)); - frag->nh.raw = frag->data; - fh = (struct frag_hdr*)(frag->data + hlen); - frag->h.raw = frag->data + hlen + sizeof(struct frag_hdr); + skb_reset_network_header(frag); + fh = (struct frag_hdr *)(skb_network_header(frag) + hlen); + frag->transport_header = (frag->network_header + hlen + + sizeof(struct frag_hdr)); /* * Charge the memory for the fragment to any owner @@ -747,7 +767,7 @@ slow_path: /* * Copy the packet header into the new buffer. */ - memcpy(frag->nh.raw, skb->data, hlen); + skb_copy_from_linear_data(skb, skb_network_header(frag), hlen); /* * Build fragment header. @@ -763,14 +783,15 @@ slow_path: /* * Copy a block of the IP datagram. 
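Note on the ip6_output.c fragmentation hunks above: ip6_skb_dst_mtu() lets an IPV6_PMTUDISC_PROBE socket bypass the cached path MTU, and the new guard refuses to fragment locally when path MTU discovery is forced (or the packet has no local socket), reporting ICMPV6_PKT_TOOBIG instead. A simplified decision sketch (symbolic enum values and stand-in functions, not the kernel constants):

#include <stdio.h>

enum pmtudisc { PMTUDISC_DONT, PMTUDISC_WANT, PMTUDISC_DO, PMTUDISC_PROBE };

struct out_ctx {
	int has_socket;        /* locally generated packet with an inet6 socket? */
	enum pmtudisc policy;  /* per-socket IPV6_MTU_DISCOVER setting           */
	unsigned dev_mtu;      /* outgoing device MTU                            */
	unsigned path_mtu;     /* cached path MTU on the dst entry               */
};

/* PROBE ignores the cached path MTU and uses the device MTU (ip6_skb_dst_mtu). */
static unsigned effective_mtu(const struct out_ctx *c)
{
	return (c->has_socket && c->policy == PMTUDISC_PROBE) ? c->dev_mtu
							       : c->path_mtu;
}

/* DF semantics: with no socket, or DO/PROBE set, never fragment locally;
 * report "packet too big" back instead (ICMPV6_PKT_TOOBIG in the patch). */
static int may_fragment(const struct out_ctx *c)
{
	return c->has_socket && c->policy < PMTUDISC_DO;
}

int main(void)
{
	struct out_ctx c = { 1, PMTUDISC_PROBE, 1500, 1280 };

	printf("mtu=%u fragment_allowed=%d\n", effective_mtu(&c), may_fragment(&c));
	return 0;
}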
*/ - if (skb_copy_bits(skb, ptr, frag->h.raw, len)) + if (skb_copy_bits(skb, ptr, skb_transport_header(skb), len)) BUG(); left -= len; fh->frag_off = htons(offset); if (left > 0) fh->frag_off |= htons(IP6_MF); - frag->nh.ipv6h->payload_len = htons(frag->len - sizeof(struct ipv6hdr)); + ipv6_hdr(frag)->payload_len = htons(frag->len - + sizeof(struct ipv6hdr)); ptr += len; offset += len; @@ -861,6 +882,41 @@ static int ip6_dst_lookup_tail(struct so goto out_err_release; } +#ifdef CONFIG_IPV6_OPTIMISTIC_DAD + /* + * Here if the dst entry we've looked up + * has a neighbour entry that is in the INCOMPLETE + * state and the src address from the flow is + * marked as OPTIMISTIC, we release the found + * dst entry and replace it instead with the + * dst entry of the nexthop router + */ + if (!((*dst)->neighbour->nud_state & NUD_VALID)) { + struct inet6_ifaddr *ifp; + struct flowi fl_gw; + int redirect; + + ifp = ipv6_get_ifaddr(&fl->fl6_src, (*dst)->dev, 1); + + redirect = (ifp && ifp->flags & IFA_F_OPTIMISTIC); + if (ifp) + in6_ifa_put(ifp); + + if (redirect) { + /* + * We need to get the dst entry for the + * default router instead + */ + dst_release(*dst); + memcpy(&fl_gw, fl, sizeof(struct flowi)); + memset(&fl_gw.fl6_dst, 0, sizeof(struct in6_addr)); + *dst = ip6_route_output(sk, &fl_gw); + if ((err = (*dst)->error)) + goto out_err_release; + } + } +#endif + return 0; out_err_release: @@ -939,10 +995,10 @@ static inline int ip6_ufo_append_data(st skb_put(skb,fragheaderlen + transhdrlen); /* initialize network header pointer */ - skb->nh.raw = skb->data; + skb_reset_network_header(skb); /* initialize protocol header pointer */ - skb->h.raw = skb->data + fragheaderlen; + skb->transport_header = skb->network_header + fragheaderlen; skb->ip_summed = CHECKSUM_PARTIAL; skb->csum = 0; @@ -1015,7 +1071,8 @@ int ip6_append_data(struct sock *sk, int inet->cork.fl = *fl; np->cork.hop_limit = hlimit; np->cork.tclass = tclass; - mtu = dst_mtu(rt->u.dst.path); + mtu = np->pmtudisc == IPV6_PMTUDISC_PROBE ? 
+ rt->u.dst.dev->mtu : dst_mtu(rt->u.dst.path); if (np->frag_size < mtu) { if (np->frag_size) mtu = np->frag_size; @@ -1162,10 +1219,10 @@ alloc_new_skb: * Find where to start putting bytes */ data = skb_put(skb, fraglen); - skb->nh.raw = data + exthdrlen; + skb_set_network_header(skb, exthdrlen); data += fragheaderlen; - skb->h.raw = data + exthdrlen; - + skb->transport_header = (skb->network_header + + fragheaderlen); if (fraggap) { skb->csum = skb_copy_and_csum_bits( skb_prev, maxfraglen, @@ -1288,10 +1345,10 @@ int ip6_push_pending_frames(struct sock tail_skb = &(skb_shinfo(skb)->frag_list); /* move skb->data to ip header from ext header */ - if (skb->data < skb->nh.raw) - __skb_pull(skb, skb->nh.raw - skb->data); + if (skb->data < skb_network_header(skb)) + __skb_pull(skb, skb_network_offset(skb)); while ((tmp_skb = __skb_dequeue(&sk->sk_write_queue)) != NULL) { - __skb_pull(tmp_skb, skb->h.raw - skb->nh.raw); + __skb_pull(tmp_skb, skb_network_header_len(skb)); *tail_skb = tmp_skb; tail_skb = &(tmp_skb->next); skb->len += tmp_skb->len; @@ -1303,13 +1360,15 @@ int ip6_push_pending_frames(struct sock } ipv6_addr_copy(final_dst, &fl->fl6_dst); - __skb_pull(skb, skb->h.raw - skb->nh.raw); + __skb_pull(skb, skb_network_header_len(skb)); if (opt && opt->opt_flen) ipv6_push_frag_opts(skb, opt, &proto); if (opt && opt->opt_nflen) ipv6_push_nfrag_opts(skb, opt, &proto, &final_dst); - skb->nh.ipv6h = hdr = (struct ipv6hdr*) skb_push(skb, sizeof(struct ipv6hdr)); + skb_push(skb, sizeof(struct ipv6hdr)); + skb_reset_network_header(skb); + hdr = ipv6_hdr(skb); *(__be32*)hdr = fl->fl6_flowlabel | htonl(0x60000000 | ((int)np->cork.tclass << 20)); diff -puN net/ipv6/ip6_tunnel.c~git-net net/ipv6/ip6_tunnel.c --- a/net/ipv6/ip6_tunnel.c~git-net +++ a/net/ipv6/ip6_tunnel.c @@ -1,14 +1,15 @@ /* - * IPv6 over IPv6 tunnel device + * IPv6 tunneling device * Linux INET6 implementation * * Authors: * Ville Nuorvala + * Yasuyuki Kozakai * * $Id$ * * Based on: - * linux/net/ipv6/sit.c + * linux/net/ipv6/sit.c and linux/net/ipv4/ipip.c * * RFC 2473 * @@ -24,6 +25,7 @@ #include #include #include +#include #include #include #include @@ -41,6 +43,7 @@ #include #include +#include #include #include #include @@ -51,7 +54,7 @@ #include MODULE_AUTHOR("Ville Nuorvala"); -MODULE_DESCRIPTION("IPv6-in-IPv6 tunnel"); +MODULE_DESCRIPTION("IPv6 tunneling device"); MODULE_LICENSE("GPL"); #define IPV6_TLV_TEL_DST_SIZE 8 @@ -63,6 +66,7 @@ MODULE_LICENSE("GPL"); #endif #define IPV6_TCLASS_MASK (IPV6_FLOWINFO_MASK & ~IPV6_FLOWLABEL_MASK) +#define IPV6_TCLASS_SHIFT 20 #define HASH_SIZE 32 @@ -70,12 +74,12 @@ MODULE_LICENSE("GPL"); (addr)->s6_addr32[2] ^ (addr)->s6_addr32[3]) & \ (HASH_SIZE - 1)) -static int ip6ip6_fb_tnl_dev_init(struct net_device *dev); -static int ip6ip6_tnl_dev_init(struct net_device *dev); -static void ip6ip6_tnl_dev_setup(struct net_device *dev); +static int ip6_fb_tnl_dev_init(struct net_device *dev); +static int ip6_tnl_dev_init(struct net_device *dev); +static void ip6_tnl_dev_setup(struct net_device *dev); /* the IPv6 tunnel fallback device */ -static struct net_device *ip6ip6_fb_tnl_dev; +static struct net_device *ip6_fb_tnl_dev; /* lists for storing tunnels in use */ @@ -84,7 +88,7 @@ static struct ip6_tnl *tnls_wc[1]; static struct ip6_tnl **tnls[2] = { tnls_wc, tnls_r_l }; /* lock for the tunnel lists */ -static DEFINE_RWLOCK(ip6ip6_lock); +static DEFINE_RWLOCK(ip6_tnl_lock); static inline struct dst_entry *ip6_tnl_dst_check(struct ip6_tnl *t) { @@ -115,7 +119,7 @@ static inline void 
ip6_tnl_dst_store(str } /** - * ip6ip6_tnl_lookup - fetch tunnel matching the end-point addresses + * ip6_tnl_lookup - fetch tunnel matching the end-point addresses * @remote: the address of the tunnel exit-point * @local: the address of the tunnel entry-point * @@ -126,7 +130,7 @@ static inline void ip6_tnl_dst_store(str **/ static struct ip6_tnl * -ip6ip6_tnl_lookup(struct in6_addr *remote, struct in6_addr *local) +ip6_tnl_lookup(struct in6_addr *remote, struct in6_addr *local) { unsigned h0 = HASH(remote); unsigned h1 = HASH(local); @@ -145,18 +149,18 @@ ip6ip6_tnl_lookup(struct in6_addr *remot } /** - * ip6ip6_bucket - get head of list matching given tunnel parameters + * ip6_tnl_bucket - get head of list matching given tunnel parameters * @p: parameters containing tunnel end-points * * Description: - * ip6ip6_bucket() returns the head of the list matching the + * ip6_tnl_bucket() returns the head of the list matching the * &struct in6_addr entries laddr and raddr in @p. * * Return: head of IPv6 tunnel list **/ static struct ip6_tnl ** -ip6ip6_bucket(struct ip6_tnl_parm *p) +ip6_tnl_bucket(struct ip6_tnl_parm *p) { struct in6_addr *remote = &p->raddr; struct in6_addr *local = &p->laddr; @@ -171,36 +175,36 @@ ip6ip6_bucket(struct ip6_tnl_parm *p) } /** - * ip6ip6_tnl_link - add tunnel to hash table + * ip6_tnl_link - add tunnel to hash table * @t: tunnel to be added **/ static void -ip6ip6_tnl_link(struct ip6_tnl *t) +ip6_tnl_link(struct ip6_tnl *t) { - struct ip6_tnl **tp = ip6ip6_bucket(&t->parms); + struct ip6_tnl **tp = ip6_tnl_bucket(&t->parms); t->next = *tp; - write_lock_bh(&ip6ip6_lock); + write_lock_bh(&ip6_tnl_lock); *tp = t; - write_unlock_bh(&ip6ip6_lock); + write_unlock_bh(&ip6_tnl_lock); } /** - * ip6ip6_tnl_unlink - remove tunnel from hash table + * ip6_tnl_unlink - remove tunnel from hash table * @t: tunnel to be removed **/ static void -ip6ip6_tnl_unlink(struct ip6_tnl *t) +ip6_tnl_unlink(struct ip6_tnl *t) { struct ip6_tnl **tp; - for (tp = ip6ip6_bucket(&t->parms); *tp; tp = &(*tp)->next) { + for (tp = ip6_tnl_bucket(&t->parms); *tp; tp = &(*tp)->next) { if (t == *tp) { - write_lock_bh(&ip6ip6_lock); + write_lock_bh(&ip6_tnl_lock); *tp = t->next; - write_unlock_bh(&ip6ip6_lock); + write_unlock_bh(&ip6_tnl_lock); break; } } @@ -237,12 +241,12 @@ static struct ip6_tnl *ip6_tnl_create(st if (i == IP6_TNL_MAX) goto failed; } - dev = alloc_netdev(sizeof (*t), name, ip6ip6_tnl_dev_setup); + dev = alloc_netdev(sizeof (*t), name, ip6_tnl_dev_setup); if (dev == NULL) goto failed; t = netdev_priv(dev); - dev->init = ip6ip6_tnl_dev_init; + dev->init = ip6_tnl_dev_init; t->parms = *p; if ((err = register_netdevice(dev)) < 0) { @@ -250,19 +254,19 @@ static struct ip6_tnl *ip6_tnl_create(st goto failed; } dev_hold(dev); - ip6ip6_tnl_link(t); + ip6_tnl_link(t); return t; failed: return NULL; } /** - * ip6ip6_tnl_locate - find or create tunnel matching given parameters + * ip6_tnl_locate - find or create tunnel matching given parameters * @p: tunnel parameters * @create: != 0 if allowed to create new tunnel if no match found * * Description: - * ip6ip6_tnl_locate() first tries to locate an existing tunnel + * ip6_tnl_locate() first tries to locate an existing tunnel * based on @parms. If this is unsuccessful, but @create is set a new * tunnel device is created and registered for use. 
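Note on the ip6_tnl_lookup()/ip6_tnl_bucket() renames above: the lookup logic is untouched by the rename — hash the endpoint addresses, scan the bucket for an exact (remote, local) pair, and fall back to the wildcard tunnel otherwise. A compressed sketch of that shape (toy 32-bit addresses and hash, not the kernel's HASH() macro or list handling):

#include <stdio.h>
#include <stdint.h>

#define HASH_SIZE 32

struct tnl { uint32_t raddr, laddr; struct tnl *next; };

static struct tnl *buckets[HASH_SIZE];  /* keyed tunnels                    */
static struct tnl *wildcard;            /* fallback ip6tnl0-style device    */

static unsigned hash_addr(uint32_t a) { return a & (HASH_SIZE - 1); }

static struct tnl *tnl_lookup(uint32_t remote, uint32_t local)
{
	struct tnl *t;

	for (t = buckets[hash_addr(remote)]; t; t = t->next)
		if (t->raddr == remote && t->laddr == local)
			return t;       /* exact endpoint match */
	return wildcard;                /* otherwise the wildcard tunnel */
}

int main(void)
{
	struct tnl keyed = { 1, 2, NULL }, any = { 0, 0, NULL };

	buckets[hash_addr(1)] = &keyed;
	wildcard = &any;
	printf("%s\n", tnl_lookup(1, 2) == &keyed ? "keyed" : "wildcard");
	return 0;
}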
* @@ -270,13 +274,13 @@ failed: * matching tunnel or NULL **/ -static struct ip6_tnl *ip6ip6_tnl_locate(struct ip6_tnl_parm *p, int create) +static struct ip6_tnl *ip6_tnl_locate(struct ip6_tnl_parm *p, int create) { struct in6_addr *remote = &p->raddr; struct in6_addr *local = &p->laddr; struct ip6_tnl *t; - for (t = *ip6ip6_bucket(p); t; t = t->next) { + for (t = *ip6_tnl_bucket(p); t; t = t->next) { if (ipv6_addr_equal(local, &t->parms.laddr) && ipv6_addr_equal(remote, &t->parms.raddr)) return t; @@ -287,24 +291,24 @@ static struct ip6_tnl *ip6ip6_tnl_locate } /** - * ip6ip6_tnl_dev_uninit - tunnel device uninitializer + * ip6_tnl_dev_uninit - tunnel device uninitializer * @dev: the device to be destroyed * * Description: - * ip6ip6_tnl_dev_uninit() removes tunnel from its list + * ip6_tnl_dev_uninit() removes tunnel from its list **/ static void -ip6ip6_tnl_dev_uninit(struct net_device *dev) +ip6_tnl_dev_uninit(struct net_device *dev) { struct ip6_tnl *t = netdev_priv(dev); - if (dev == ip6ip6_fb_tnl_dev) { - write_lock_bh(&ip6ip6_lock); + if (dev == ip6_fb_tnl_dev) { + write_lock_bh(&ip6_tnl_lock); tnls_wc[0] = NULL; - write_unlock_bh(&ip6ip6_lock); + write_unlock_bh(&ip6_tnl_lock); } else { - ip6ip6_tnl_unlink(t); + ip6_tnl_unlink(t); } ip6_tnl_dst_reset(t); dev_put(dev); @@ -372,16 +376,16 @@ parse_tlv_tnl_enc_lim(struct sk_buff *sk } /** - * ip6ip6_err - tunnel error handler + * ip6_tnl_err - tunnel error handler * * Description: - * ip6ip6_err() should handle errors in the tunnel according + * ip6_tnl_err() should handle errors in the tunnel according * to the specifications in RFC 2473. **/ static int -ip6ip6_err(struct sk_buff *skb, struct inet6_skb_parm *opt, - int type, int code, int offset, __be32 info) +ip6_tnl_err(struct sk_buff *skb, __u8 ipproto, struct inet6_skb_parm *opt, + int *type, int *code, int *msg, __be32 *info, int offset) { struct ipv6hdr *ipv6h = (struct ipv6hdr *) skb->data; struct ip6_tnl *t; @@ -396,13 +400,16 @@ ip6ip6_err(struct sk_buff *skb, struct i in trouble since we might need the source address for further processing of the error. 
*/ - read_lock(&ip6ip6_lock); - if ((t = ip6ip6_tnl_lookup(&ipv6h->daddr, &ipv6h->saddr)) == NULL) + read_lock(&ip6_tnl_lock); + if ((t = ip6_tnl_lookup(&ipv6h->daddr, &ipv6h->saddr)) == NULL) + goto out; + + if (t->parms.proto != ipproto && t->parms.proto != 0) goto out; err = 0; - switch (type) { + switch (*type) { __u32 teli; struct ipv6_tlv_tnl_enc_lim *tel; __u32 mtu; @@ -414,7 +421,7 @@ ip6ip6_err(struct sk_buff *skb, struct i rel_msg = 1; break; case ICMPV6_TIME_EXCEED: - if (code == ICMPV6_EXC_HOPLIMIT) { + if ((*code) == ICMPV6_EXC_HOPLIMIT) { if (net_ratelimit()) printk(KERN_WARNING "%s: Too small hop limit or " @@ -425,10 +432,10 @@ ip6ip6_err(struct sk_buff *skb, struct i break; case ICMPV6_PARAMPROB: teli = 0; - if (code == ICMPV6_HDR_FIELD) + if ((*code) == ICMPV6_HDR_FIELD) teli = parse_tlv_tnl_enc_lim(skb, skb->data); - if (teli && teli == ntohl(info) - 2) { + if (teli && teli == ntohl(*info) - 2) { tel = (struct ipv6_tlv_tnl_enc_lim *) &skb->data[teli]; if (tel->encap_limit == 0) { if (net_ratelimit()) @@ -445,7 +452,7 @@ ip6ip6_err(struct sk_buff *skb, struct i } break; case ICMPV6_PKT_TOOBIG: - mtu = ntohl(info) - offset; + mtu = ntohl(*info) - offset; if (mtu < IPV6_MIN_MTU) mtu = IPV6_MIN_MTU; t->dev->mtu = mtu; @@ -458,20 +465,144 @@ ip6ip6_err(struct sk_buff *skb, struct i } break; } - if (rel_msg && pskb_may_pull(skb, offset + sizeof (*ipv6h))) { + + *type = rel_type; + *code = rel_code; + *info = rel_info; + *msg = rel_msg; + +out: + read_unlock(&ip6_tnl_lock); + return err; +} + +static int +ip4ip6_err(struct sk_buff *skb, struct inet6_skb_parm *opt, + int type, int code, int offset, __u32 info) +{ + int rel_msg = 0; + int rel_type = type; + int rel_code = code; + __u32 rel_info = info; + int err; + struct sk_buff *skb2; + struct iphdr *eiph; + struct flowi fl; + struct rtable *rt; + + err = ip6_tnl_err(skb, IPPROTO_IPIP, opt, &rel_type, &rel_code, + &rel_msg, &rel_info, offset); + if (err < 0) + return err; + + if (rel_msg == 0) + return 0; + + switch (rel_type) { + case ICMPV6_DEST_UNREACH: + if (rel_code != ICMPV6_ADDR_UNREACH) + return 0; + rel_type = ICMP_DEST_UNREACH; + rel_code = ICMP_HOST_UNREACH; + break; + case ICMPV6_PKT_TOOBIG: + if (rel_code != 0) + return 0; + rel_type = ICMP_DEST_UNREACH; + rel_code = ICMP_FRAG_NEEDED; + break; + default: + return 0; + } + + if (!pskb_may_pull(skb, offset + sizeof(struct iphdr))) + return 0; + + skb2 = skb_clone(skb, GFP_ATOMIC); + if (!skb2) + return 0; + + dst_release(skb2->dst); + skb2->dst = NULL; + skb_pull(skb2, offset); + skb_reset_network_header(skb2); + eiph = ip_hdr(skb2); + + /* Try to guess incoming interface */ + memset(&fl, 0, sizeof(fl)); + fl.fl4_dst = eiph->saddr; + fl.fl4_tos = RT_TOS(eiph->tos); + fl.proto = IPPROTO_IPIP; + if (ip_route_output_key(&rt, &fl)) + goto out; + + skb2->dev = rt->u.dst.dev; + + /* route "incoming" packet */ + if (rt->rt_flags & RTCF_LOCAL) { + ip_rt_put(rt); + rt = NULL; + fl.fl4_dst = eiph->daddr; + fl.fl4_src = eiph->saddr; + fl.fl4_tos = eiph->tos; + if (ip_route_output_key(&rt, &fl) || + rt->u.dst.dev->type != ARPHRD_TUNNEL) { + ip_rt_put(rt); + goto out; + } + } else { + ip_rt_put(rt); + if (ip_route_input(skb2, eiph->daddr, eiph->saddr, eiph->tos, + skb2->dev) || + skb2->dst->dev->type != ARPHRD_TUNNEL) + goto out; + } + + /* change mtu on this route */ + if (rel_type == ICMP_DEST_UNREACH && rel_code == ICMP_FRAG_NEEDED) { + if (rel_info > dst_mtu(skb2->dst)) + goto out; + + skb2->dst->ops->update_pmtu(skb2->dst, rel_info); + rel_info = htonl(rel_info); + } + + 
icmp_send(skb2, rel_type, rel_code, rel_info); + +out: + kfree_skb(skb2); + return 0; +} + +static int +ip6ip6_err(struct sk_buff *skb, struct inet6_skb_parm *opt, + int type, int code, int offset, __u32 info) +{ + int rel_msg = 0; + int rel_type = type; + int rel_code = code; + __u32 rel_info = info; + int err; + + err = ip6_tnl_err(skb, IPPROTO_IPV6, opt, &rel_type, &rel_code, + &rel_msg, &rel_info, offset); + if (err < 0) + return err; + + if (rel_msg && pskb_may_pull(skb, offset + sizeof(struct ipv6hdr))) { struct rt6_info *rt; struct sk_buff *skb2 = skb_clone(skb, GFP_ATOMIC); if (!skb2) - goto out; + return 0; dst_release(skb2->dst); skb2->dst = NULL; skb_pull(skb2, offset); - skb2->nh.raw = skb2->data; + skb_reset_network_header(skb2); /* Try to guess incoming interface */ - rt = rt6_lookup(&skb2->nh.ipv6h->saddr, NULL, 0, 0); + rt = rt6_lookup(&ipv6_hdr(skb2)->saddr, NULL, 0, 0); if (rt && rt->rt6i_dev) skb2->dev = rt->rt6i_dev; @@ -483,19 +614,34 @@ ip6ip6_err(struct sk_buff *skb, struct i kfree_skb(skb2); } -out: - read_unlock(&ip6ip6_lock); - return err; + + return 0; } -static inline void ip6ip6_ecn_decapsulate(struct ipv6hdr *outer_iph, - struct sk_buff *skb) +static void ip4ip6_dscp_ecn_decapsulate(struct ip6_tnl *t, + struct ipv6hdr *ipv6h, + struct sk_buff *skb) { - struct ipv6hdr *inner_iph = skb->nh.ipv6h; + __u8 dsfield = ipv6_get_dsfield(ipv6h) & ~INET_ECN_MASK; - if (INET_ECN_is_ce(ipv6_get_dsfield(outer_iph))) - IP6_ECN_set_ce(inner_iph); + if (t->parms.flags & IP6_TNL_F_RCV_DSCP_COPY) + ipv4_change_dsfield(ip_hdr(skb), INET_ECN_MASK, dsfield); + + if (INET_ECN_is_ce(dsfield)) + IP_ECN_set_ce(ip_hdr(skb)); +} + +static void ip6ip6_dscp_ecn_decapsulate(struct ip6_tnl *t, + struct ipv6hdr *ipv6h, + struct sk_buff *skb) +{ + if (t->parms.flags & IP6_TNL_F_RCV_DSCP_COPY) + ipv6_copy_dscp(ipv6h, ipv6_hdr(skb)); + + if (INET_ECN_is_ce(ipv6_get_dsfield(ipv6h))) + IP6_ECN_set_ce(ipv6_hdr(skb)); } + static inline int ip6_tnl_rcv_ctl(struct ip6_tnl *t) { struct ip6_tnl_parm *p = &t->parms; @@ -519,53 +665,61 @@ static inline int ip6_tnl_rcv_ctl(struct } /** - * ip6ip6_rcv - decapsulate IPv6 packet and retransmit it locally + * ip6_tnl_rcv - decapsulate IPv6 packet and retransmit it locally * @skb: received socket buffer + * @protocol: ethernet protocol ID + * @dscp_ecn_decapsulate: the function to decapsulate DSCP code and ECN * * Return: 0 **/ -static int -ip6ip6_rcv(struct sk_buff *skb) +static int ip6_tnl_rcv(struct sk_buff *skb, __u16 protocol, + __u8 ipproto, + void (*dscp_ecn_decapsulate)(struct ip6_tnl *t, + struct ipv6hdr *ipv6h, + struct sk_buff *skb)) { - struct ipv6hdr *ipv6h; struct ip6_tnl *t; + struct ipv6hdr *ipv6h = ipv6_hdr(skb); - ipv6h = skb->nh.ipv6h; + read_lock(&ip6_tnl_lock); - read_lock(&ip6ip6_lock); + if ((t = ip6_tnl_lookup(&ipv6h->saddr, &ipv6h->daddr)) != NULL) { + if (t->parms.proto != ipproto && t->parms.proto != 0) { + read_unlock(&ip6_tnl_lock); + goto discard; + } - if ((t = ip6ip6_tnl_lookup(&ipv6h->saddr, &ipv6h->daddr)) != NULL) { if (!xfrm6_policy_check(NULL, XFRM_POLICY_IN, skb)) { - read_unlock(&ip6ip6_lock); + read_unlock(&ip6_tnl_lock); goto discard; } if (!ip6_tnl_rcv_ctl(t)) { t->stat.rx_dropped++; - read_unlock(&ip6ip6_lock); + read_unlock(&ip6_tnl_lock); goto discard; } secpath_reset(skb); - skb->mac.raw = skb->nh.raw; - skb->nh.raw = skb->data; - skb->protocol = htons(ETH_P_IPV6); + skb->mac_header = skb->network_header; + skb_reset_network_header(skb); + skb->protocol = htons(protocol); skb->pkt_type = PACKET_HOST; memset(skb->cb, 
0, sizeof(struct inet6_skb_parm)); skb->dev = t->dev; dst_release(skb->dst); skb->dst = NULL; nf_reset(skb); - if (t->parms.flags & IP6_TNL_F_RCV_DSCP_COPY) - ipv6_copy_dscp(ipv6h, skb->nh.ipv6h); - ip6ip6_ecn_decapsulate(ipv6h, skb); + + dscp_ecn_decapsulate(t, ipv6h, skb); + t->stat.rx_packets++; t->stat.rx_bytes += skb->len; netif_rx(skb); - read_unlock(&ip6ip6_lock); + read_unlock(&ip6_tnl_lock); return 0; } - read_unlock(&ip6ip6_lock); + read_unlock(&ip6_tnl_lock); return 1; discard: @@ -573,6 +727,18 @@ discard: return 0; } +static int ip4ip6_rcv(struct sk_buff *skb) +{ + return ip6_tnl_rcv(skb, ETH_P_IP, IPPROTO_IPIP, + ip4ip6_dscp_ecn_decapsulate); +} + +static int ip6ip6_rcv(struct sk_buff *skb) +{ + return ip6_tnl_rcv(skb, ETH_P_IPV6, IPPROTO_IPV6, + ip6ip6_dscp_ecn_decapsulate); +} + struct ipv6_tel_txoption { struct ipv6_txoptions ops; __u8 dst_opt[8]; @@ -593,7 +759,7 @@ static void init_tel_txopt(struct ipv6_t } /** - * ip6ip6_tnl_addr_conflict - compare packet addresses to tunnel's own + * ip6_tnl_addr_conflict - compare packet addresses to tunnel's own * @t: the outgoing tunnel device * @hdr: IPv6 header from the incoming packet * @@ -607,7 +773,7 @@ static void init_tel_txopt(struct ipv6_t **/ static inline int -ip6ip6_tnl_addr_conflict(struct ip6_tnl *t, struct ipv6hdr *hdr) +ip6_tnl_addr_conflict(struct ip6_tnl *t, struct ipv6hdr *hdr) { return ipv6_addr_equal(&t->parms.raddr, &hdr->saddr); } @@ -641,72 +807,49 @@ static inline int ip6_tnl_xmit_ctl(struc return ret; } /** - * ip6ip6_tnl_xmit - encapsulate packet and send + * ip6_tnl_xmit2 - encapsulate packet and send * @skb: the outgoing socket buffer * @dev: the outgoing tunnel device + * @dsfield: dscp code for outer header + * @fl: flow of tunneled packet + * @encap_limit: encapsulation limit + * @pmtu: Path MTU is stored if packet is too big * * Description: * Build new header and do some sanity checks on the packet before sending * it. * * Return: - * 0 + * 0 on success + * -1 fail + * %-EMSGSIZE message too big. return mtu in this case. 
**/ -static int -ip6ip6_tnl_xmit(struct sk_buff *skb, struct net_device *dev) +static int ip6_tnl_xmit2(struct sk_buff *skb, + struct net_device *dev, + __u8 dsfield, + struct flowi *fl, + int encap_limit, + __u32 *pmtu) { struct ip6_tnl *t = netdev_priv(dev); struct net_device_stats *stats = &t->stat; - struct ipv6hdr *ipv6h = skb->nh.ipv6h; - int encap_limit = -1; + struct ipv6hdr *ipv6h = ipv6_hdr(skb); struct ipv6_tel_txoption opt; - __u16 offset; - struct flowi fl; struct dst_entry *dst; struct net_device *tdev; int mtu; int max_headroom = sizeof(struct ipv6hdr); u8 proto; - int err; + int err = -1; int pkt_len; - int dsfield; - - if (t->recursion++) { - stats->collisions++; - goto tx_err; - } - if (skb->protocol != htons(ETH_P_IPV6) || - !ip6_tnl_xmit_ctl(t) || ip6ip6_tnl_addr_conflict(t, ipv6h)) - goto tx_err; - - if ((offset = parse_tlv_tnl_enc_lim(skb, skb->nh.raw)) > 0) { - struct ipv6_tlv_tnl_enc_lim *tel; - tel = (struct ipv6_tlv_tnl_enc_lim *) &skb->nh.raw[offset]; - if (tel->encap_limit == 0) { - icmpv6_send(skb, ICMPV6_PARAMPROB, - ICMPV6_HDR_FIELD, offset + 2, skb->dev); - goto tx_err; - } - encap_limit = tel->encap_limit - 1; - } else if (!(t->parms.flags & IP6_TNL_F_IGN_ENCAP_LIMIT)) - encap_limit = t->parms.encap_limit; - - memcpy(&fl, &t->fl, sizeof (fl)); - proto = fl.proto; - - dsfield = ipv6_get_dsfield(ipv6h); - if ((t->parms.flags & IP6_TNL_F_USE_ORIG_TCLASS)) - fl.fl6_flowlabel |= (*(__be32 *) ipv6h & IPV6_TCLASS_MASK); - if ((t->parms.flags & IP6_TNL_F_USE_ORIG_FLOWLABEL)) - fl.fl6_flowlabel |= (*(__be32 *) ipv6h & IPV6_FLOWLABEL_MASK); if ((dst = ip6_tnl_dst_check(t)) != NULL) dst_hold(dst); else { - dst = ip6_route_output(NULL, &fl); + dst = ip6_route_output(NULL, fl); - if (dst->error || xfrm_lookup(&dst, &fl, NULL, 0) < 0) + if (dst->error || xfrm_lookup(&dst, fl, NULL, 0) < 0) goto tx_err_link_failure; } @@ -730,7 +873,8 @@ ip6ip6_tnl_xmit(struct sk_buff *skb, str if (skb->dst) skb->dst->ops->update_pmtu(skb->dst, mtu); if (skb->len > mtu) { - icmpv6_send(skb, ICMPV6_PKT_TOOBIG, 0, mtu, dev); + *pmtu = mtu; + err = -EMSGSIZE; goto tx_err_dst_release; } @@ -754,22 +898,24 @@ ip6ip6_tnl_xmit(struct sk_buff *skb, str dst_release(skb->dst); skb->dst = dst_clone(dst); - skb->h.raw = skb->nh.raw; + skb->transport_header = skb->network_header; + proto = fl->proto; if (encap_limit >= 0) { init_tel_txopt(&opt, encap_limit); ipv6_push_nfrag_opts(skb, &opt.ops, &proto, NULL); } - skb->nh.raw = skb_push(skb, sizeof(struct ipv6hdr)); - ipv6h = skb->nh.ipv6h; - *(__be32*)ipv6h = fl.fl6_flowlabel | htonl(0x60000000); + skb_push(skb, sizeof(struct ipv6hdr)); + skb_reset_network_header(skb); + ipv6h = ipv6_hdr(skb); + *(__be32*)ipv6h = fl->fl6_flowlabel | htonl(0x60000000); dsfield = INET_ECN_encapsulate(0, dsfield); ipv6_change_dsfield(ipv6h, ~INET_ECN_MASK, dsfield); ipv6h->payload_len = htons(skb->len - sizeof(struct ipv6hdr)); ipv6h->hop_limit = t->parms.hop_limit; ipv6h->nexthdr = proto; - ipv6_addr_copy(&ipv6h->saddr, &fl.fl6_src); - ipv6_addr_copy(&ipv6h->daddr, &fl.fl6_dst); + ipv6_addr_copy(&ipv6h->saddr, &fl->fl6_src); + ipv6_addr_copy(&ipv6h->daddr, &fl->fl6_dst); nf_reset(skb); pkt_len = skb->len; err = NF_HOOK(PF_INET6, NF_IP6_LOCAL_OUT, skb, NULL, @@ -783,13 +929,131 @@ ip6ip6_tnl_xmit(struct sk_buff *skb, str stats->tx_aborted_errors++; } ip6_tnl_dst_store(t, dst); - t->recursion--; return 0; tx_err_link_failure: stats->tx_carrier_errors++; dst_link_failure(skb); tx_err_dst_release: dst_release(dst); + return err; +} + +static inline int 
+ip4ip6_tnl_xmit(struct sk_buff *skb, struct net_device *dev) +{ + struct ip6_tnl *t = netdev_priv(dev); + struct iphdr *iph = ip_hdr(skb); + int encap_limit = -1; + struct flowi fl; + __u8 dsfield; + __u32 mtu; + int err; + + if ((t->parms.proto != IPPROTO_IPIP && t->parms.proto != 0) || + !ip6_tnl_xmit_ctl(t)) + return -1; + + if (!(t->parms.flags & IP6_TNL_F_IGN_ENCAP_LIMIT)) + encap_limit = t->parms.encap_limit; + + memcpy(&fl, &t->fl, sizeof (fl)); + fl.proto = IPPROTO_IPIP; + + dsfield = ipv4_get_dsfield(iph); + + if ((t->parms.flags & IP6_TNL_F_USE_ORIG_TCLASS)) + fl.fl6_flowlabel |= ntohl(((__u32)iph->tos << IPV6_TCLASS_SHIFT) + & IPV6_TCLASS_MASK); + + err = ip6_tnl_xmit2(skb, dev, dsfield, &fl, encap_limit, &mtu); + if (err != 0) { + /* XXX: send ICMP error even if DF is not set. */ + if (err == -EMSGSIZE) + icmp_send(skb, ICMP_DEST_UNREACH, ICMP_FRAG_NEEDED, + htonl(mtu)); + return -1; + } + + return 0; +} + +static inline int +ip6ip6_tnl_xmit(struct sk_buff *skb, struct net_device *dev) +{ + struct ip6_tnl *t = netdev_priv(dev); + struct ipv6hdr *ipv6h = ipv6_hdr(skb); + int encap_limit = -1; + __u16 offset; + struct flowi fl; + __u8 dsfield; + __u32 mtu; + int err; + + if ((t->parms.proto != IPPROTO_IPV6 && t->parms.proto != 0) || + !ip6_tnl_xmit_ctl(t) || ip6_tnl_addr_conflict(t, ipv6h)) + return -1; + + offset = parse_tlv_tnl_enc_lim(skb, skb_network_header(skb)); + if (offset > 0) { + struct ipv6_tlv_tnl_enc_lim *tel; + tel = (struct ipv6_tlv_tnl_enc_lim *)&skb_network_header(skb)[offset]; + if (tel->encap_limit == 0) { + icmpv6_send(skb, ICMPV6_PARAMPROB, + ICMPV6_HDR_FIELD, offset + 2, skb->dev); + return -1; + } + encap_limit = tel->encap_limit - 1; + } else if (!(t->parms.flags & IP6_TNL_F_IGN_ENCAP_LIMIT)) + encap_limit = t->parms.encap_limit; + + memcpy(&fl, &t->fl, sizeof (fl)); + fl.proto = IPPROTO_IPV6; + + dsfield = ipv6_get_dsfield(ipv6h); + if ((t->parms.flags & IP6_TNL_F_USE_ORIG_TCLASS)) + fl.fl6_flowlabel |= (*(__be32 *) ipv6h & IPV6_TCLASS_MASK); + if ((t->parms.flags & IP6_TNL_F_USE_ORIG_FLOWLABEL)) + fl.fl6_flowlabel |= (*(__be32 *) ipv6h & IPV6_FLOWLABEL_MASK); + + err = ip6_tnl_xmit2(skb, dev, dsfield, &fl, encap_limit, &mtu); + if (err != 0) { + if (err == -EMSGSIZE) + icmpv6_send(skb, ICMPV6_PKT_TOOBIG, 0, mtu, dev); + return -1; + } + + return 0; +} + +static int +ip6_tnl_xmit(struct sk_buff *skb, struct net_device *dev) +{ + struct ip6_tnl *t = netdev_priv(dev); + struct net_device_stats *stats = &t->stat; + int ret; + + if (t->recursion++) { + t->stat.collisions++; + goto tx_err; + } + + switch (skb->protocol) { + case __constant_htons(ETH_P_IP): + ret = ip4ip6_tnl_xmit(skb, dev); + break; + case __constant_htons(ETH_P_IPV6): + ret = ip6ip6_tnl_xmit(skb, dev); + break; + default: + goto tx_err; + } + + if (ret < 0) + goto tx_err; + + t->recursion--; + return 0; + tx_err: stats->tx_errors++; stats->tx_dropped++; @@ -817,7 +1081,7 @@ static void ip6_tnl_set_cap(struct ip6_t } } -static void ip6ip6_tnl_link_config(struct ip6_tnl *t) +static void ip6_tnl_link_config(struct ip6_tnl *t) { struct net_device *dev = t->dev; struct ip6_tnl_parm *p = &t->parms; @@ -870,17 +1134,17 @@ static void ip6ip6_tnl_link_config(struc } /** - * ip6ip6_tnl_change - update the tunnel parameters + * ip6_tnl_change - update the tunnel parameters * @t: tunnel to be changed * @p: tunnel configuration parameters * @active: != 0 if tunnel is ready for use * * Description: - * ip6ip6_tnl_change() updates the tunnel parameters + * ip6_tnl_change() updates the tunnel parameters **/ 
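Note on the transmit-path split above: ip6_tnl_xmit2() signals an oversized packet with -EMSGSIZE and returns the tunnel MTU through *pmtu, the per-family wrappers translate that into the appropriate ICMP error, and ip6_tnl_xmit() dispatches on the inner ethertype. A condensed sketch of that calling convention (every name here is a stand-in, not the kernel function):

#include <stdio.h>
#include <errno.h>
#include <stdint.h>

#define TOY_ETH_P_IP   0x0800
#define TOY_ETH_P_IPV6 0x86DD

/* Stand-in for ip6_tnl_xmit2(): 0 on success, -EMSGSIZE with *pmtu filled in. */
static int tnl_xmit2(unsigned pkt_len, unsigned tunnel_mtu, uint32_t *pmtu)
{
	if (pkt_len > tunnel_mtu) {
		*pmtu = tunnel_mtu;
		return -EMSGSIZE;
	}
	return 0;                        /* packet encapsulated and sent */
}

/* Per-family wrappers translate -EMSGSIZE into the right ICMP error. */
static int xmit_ipv4(unsigned len, unsigned mtu)
{
	uint32_t pmtu;

	if (tnl_xmit2(len, mtu, &pmtu) == -EMSGSIZE)
		printf("send ICMP_FRAG_NEEDED, mtu=%u\n", (unsigned)pmtu);   /* icmp_send() in the patch */
	return 0;
}

static int xmit_ipv6(unsigned len, unsigned mtu)
{
	uint32_t pmtu;

	if (tnl_xmit2(len, mtu, &pmtu) == -EMSGSIZE)
		printf("send ICMPV6_PKT_TOOBIG, mtu=%u\n", (unsigned)pmtu);  /* icmpv6_send() in the patch */
	return 0;
}

/* ip6_tnl_xmit() dispatches on the ethertype of the inner packet. */
static int tnl_xmit(unsigned proto, unsigned len, unsigned mtu)
{
	switch (proto) {
	case TOY_ETH_P_IP:   return xmit_ipv4(len, mtu);
	case TOY_ETH_P_IPV6: return xmit_ipv6(len, mtu);
	default:             return -1;  /* tx_err path */
	}
}

int main(void)
{
	tnl_xmit(TOY_ETH_P_IPV6, 1600, 1480);
	return 0;
}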
static int -ip6ip6_tnl_change(struct ip6_tnl *t, struct ip6_tnl_parm *p) +ip6_tnl_change(struct ip6_tnl *t, struct ip6_tnl_parm *p) { ipv6_addr_copy(&t->parms.laddr, &p->laddr); ipv6_addr_copy(&t->parms.raddr, &p->raddr); @@ -889,19 +1153,20 @@ ip6ip6_tnl_change(struct ip6_tnl *t, str t->parms.encap_limit = p->encap_limit; t->parms.flowinfo = p->flowinfo; t->parms.link = p->link; + t->parms.proto = p->proto; ip6_tnl_dst_reset(t); - ip6ip6_tnl_link_config(t); + ip6_tnl_link_config(t); return 0; } /** - * ip6ip6_tnl_ioctl - configure ipv6 tunnels from userspace + * ip6_tnl_ioctl - configure ipv6 tunnels from userspace * @dev: virtual device associated with tunnel * @ifr: parameters passed from userspace * @cmd: command to be performed * * Description: - * ip6ip6_tnl_ioctl() is used for managing IPv6 tunnels + * ip6_tnl_ioctl() is used for managing IPv6 tunnels * from userspace. * * The possible commands are the following: @@ -923,7 +1188,7 @@ ip6ip6_tnl_change(struct ip6_tnl *t, str **/ static int -ip6ip6_tnl_ioctl(struct net_device *dev, struct ifreq *ifr, int cmd) +ip6_tnl_ioctl(struct net_device *dev, struct ifreq *ifr, int cmd) { int err = 0; struct ip6_tnl_parm p; @@ -931,12 +1196,12 @@ ip6ip6_tnl_ioctl(struct net_device *dev, switch (cmd) { case SIOCGETTUNNEL: - if (dev == ip6ip6_fb_tnl_dev) { + if (dev == ip6_fb_tnl_dev) { if (copy_from_user(&p, ifr->ifr_ifru.ifru_data, sizeof (p))) { err = -EFAULT; break; } - t = ip6ip6_tnl_locate(&p, 0); + t = ip6_tnl_locate(&p, 0); } if (t == NULL) t = netdev_priv(dev); @@ -954,10 +1219,11 @@ ip6ip6_tnl_ioctl(struct net_device *dev, if (copy_from_user(&p, ifr->ifr_ifru.ifru_data, sizeof (p))) break; err = -EINVAL; - if (p.proto != IPPROTO_IPV6) + if (p.proto != IPPROTO_IPV6 && p.proto != IPPROTO_IPIP && + p.proto != 0) break; - t = ip6ip6_tnl_locate(&p, cmd == SIOCADDTUNNEL); - if (dev != ip6ip6_fb_tnl_dev && cmd == SIOCCHGTUNNEL) { + t = ip6_tnl_locate(&p, cmd == SIOCADDTUNNEL); + if (dev != ip6_fb_tnl_dev && cmd == SIOCCHGTUNNEL) { if (t != NULL) { if (t->dev != dev) { err = -EEXIST; @@ -966,9 +1232,9 @@ ip6ip6_tnl_ioctl(struct net_device *dev, } else t = netdev_priv(dev); - ip6ip6_tnl_unlink(t); - err = ip6ip6_tnl_change(t, &p); - ip6ip6_tnl_link(t); + ip6_tnl_unlink(t); + err = ip6_tnl_change(t, &p); + ip6_tnl_link(t); netdev_state_change(dev); } if (t) { @@ -984,15 +1250,15 @@ ip6ip6_tnl_ioctl(struct net_device *dev, if (!capable(CAP_NET_ADMIN)) break; - if (dev == ip6ip6_fb_tnl_dev) { + if (dev == ip6_fb_tnl_dev) { err = -EFAULT; if (copy_from_user(&p, ifr->ifr_ifru.ifru_data, sizeof (p))) break; err = -ENOENT; - if ((t = ip6ip6_tnl_locate(&p, 0)) == NULL) + if ((t = ip6_tnl_locate(&p, 0)) == NULL) break; err = -EPERM; - if (t->dev == ip6ip6_fb_tnl_dev) + if (t->dev == ip6_fb_tnl_dev) break; dev = t->dev; } @@ -1006,20 +1272,20 @@ ip6ip6_tnl_ioctl(struct net_device *dev, } /** - * ip6ip6_tnl_get_stats - return the stats for tunnel device + * ip6_tnl_get_stats - return the stats for tunnel device * @dev: virtual device associated with tunnel * * Return: stats for device **/ static struct net_device_stats * -ip6ip6_tnl_get_stats(struct net_device *dev) +ip6_tnl_get_stats(struct net_device *dev) { return &(((struct ip6_tnl *)netdev_priv(dev))->stat); } /** - * ip6ip6_tnl_change_mtu - change mtu manually for tunnel device + * ip6_tnl_change_mtu - change mtu manually for tunnel device * @dev: virtual device associated with tunnel * @new_mtu: the new mtu * @@ -1029,7 +1295,7 @@ ip6ip6_tnl_get_stats(struct net_device * **/ static int 
-ip6ip6_tnl_change_mtu(struct net_device *dev, int new_mtu) +ip6_tnl_change_mtu(struct net_device *dev, int new_mtu) { if (new_mtu < IPV6_MIN_MTU) { return -EINVAL; @@ -1039,22 +1305,22 @@ ip6ip6_tnl_change_mtu(struct net_device } /** - * ip6ip6_tnl_dev_setup - setup virtual tunnel device + * ip6_tnl_dev_setup - setup virtual tunnel device * @dev: virtual device associated with tunnel * * Description: * Initialize function pointers and device parameters **/ -static void ip6ip6_tnl_dev_setup(struct net_device *dev) +static void ip6_tnl_dev_setup(struct net_device *dev) { SET_MODULE_OWNER(dev); - dev->uninit = ip6ip6_tnl_dev_uninit; + dev->uninit = ip6_tnl_dev_uninit; dev->destructor = free_netdev; - dev->hard_start_xmit = ip6ip6_tnl_xmit; - dev->get_stats = ip6ip6_tnl_get_stats; - dev->do_ioctl = ip6ip6_tnl_ioctl; - dev->change_mtu = ip6ip6_tnl_change_mtu; + dev->hard_start_xmit = ip6_tnl_xmit; + dev->get_stats = ip6_tnl_get_stats; + dev->do_ioctl = ip6_tnl_ioctl; + dev->change_mtu = ip6_tnl_change_mtu; dev->type = ARPHRD_TUNNEL6; dev->hard_header_len = LL_MAX_HEADER + sizeof (struct ipv6hdr); @@ -1065,50 +1331,56 @@ static void ip6ip6_tnl_dev_setup(struct /** - * ip6ip6_tnl_dev_init_gen - general initializer for all tunnel devices + * ip6_tnl_dev_init_gen - general initializer for all tunnel devices * @dev: virtual device associated with tunnel **/ static inline void -ip6ip6_tnl_dev_init_gen(struct net_device *dev) +ip6_tnl_dev_init_gen(struct net_device *dev) { struct ip6_tnl *t = netdev_priv(dev); - t->fl.proto = IPPROTO_IPV6; t->dev = dev; strcpy(t->parms.name, dev->name); } /** - * ip6ip6_tnl_dev_init - initializer for all non fallback tunnel devices + * ip6_tnl_dev_init - initializer for all non fallback tunnel devices * @dev: virtual device associated with tunnel **/ static int -ip6ip6_tnl_dev_init(struct net_device *dev) +ip6_tnl_dev_init(struct net_device *dev) { struct ip6_tnl *t = netdev_priv(dev); - ip6ip6_tnl_dev_init_gen(dev); - ip6ip6_tnl_link_config(t); + ip6_tnl_dev_init_gen(dev); + ip6_tnl_link_config(t); return 0; } /** - * ip6ip6_fb_tnl_dev_init - initializer for fallback tunnel device + * ip6_fb_tnl_dev_init - initializer for fallback tunnel device * @dev: fallback device * * Return: 0 **/ static int -ip6ip6_fb_tnl_dev_init(struct net_device *dev) +ip6_fb_tnl_dev_init(struct net_device *dev) { struct ip6_tnl *t = netdev_priv(dev); - ip6ip6_tnl_dev_init_gen(dev); + ip6_tnl_dev_init_gen(dev); + t->parms.proto = IPPROTO_IPV6; dev_hold(dev); tnls_wc[0] = t; return 0; } +static struct xfrm6_tunnel ip4ip6_handler = { + .handler = ip4ip6_rcv, + .err_handler = ip4ip6_err, + .priority = 1, +}; + static struct xfrm6_tunnel ip6ip6_handler = { .handler = ip6ip6_rcv, .err_handler = ip6ip6_err, @@ -1125,30 +1397,40 @@ static int __init ip6_tunnel_init(void) { int err; + if (xfrm6_tunnel_register(&ip4ip6_handler, AF_INET)) { + printk(KERN_ERR "ip6_tunnel init: can't register ip4ip6\n"); + err = -EAGAIN; + goto out; + } + if (xfrm6_tunnel_register(&ip6ip6_handler, AF_INET6)) { - printk(KERN_ERR "ip6ip6 init: can't register tunnel\n"); - return -EAGAIN; + printk(KERN_ERR "ip6_tunnel init: can't register ip6ip6\n"); + err = -EAGAIN; + goto unreg_ip4ip6; } - ip6ip6_fb_tnl_dev = alloc_netdev(sizeof(struct ip6_tnl), "ip6tnl0", - ip6ip6_tnl_dev_setup); + ip6_fb_tnl_dev = alloc_netdev(sizeof(struct ip6_tnl), "ip6tnl0", + ip6_tnl_dev_setup); - if (!ip6ip6_fb_tnl_dev) { + if (!ip6_fb_tnl_dev) { err = -ENOMEM; goto fail; } - ip6ip6_fb_tnl_dev->init = ip6ip6_fb_tnl_dev_init; + 
ip6_fb_tnl_dev->init = ip6_fb_tnl_dev_init; - if ((err = register_netdev(ip6ip6_fb_tnl_dev))) { - free_netdev(ip6ip6_fb_tnl_dev); + if ((err = register_netdev(ip6_fb_tnl_dev))) { + free_netdev(ip6_fb_tnl_dev); goto fail; } return 0; fail: xfrm6_tunnel_deregister(&ip6ip6_handler, AF_INET6); +unreg_ip4ip6: + xfrm6_tunnel_deregister(&ip4ip6_handler, AF_INET); +out: return err; } -static void __exit ip6ip6_destroy_tunnels(void) +static void __exit ip6_tnl_destroy_tunnels(void) { int h; struct ip6_tnl *t; @@ -1168,11 +1450,14 @@ static void __exit ip6ip6_destroy_tunnel static void __exit ip6_tunnel_cleanup(void) { + if (xfrm6_tunnel_deregister(&ip4ip6_handler, AF_INET)) + printk(KERN_INFO "ip6_tunnel close: can't deregister ip4ip6\n"); + if (xfrm6_tunnel_deregister(&ip6ip6_handler, AF_INET6)) - printk(KERN_INFO "ip6ip6 close: can't deregister tunnel\n"); + printk(KERN_INFO "ip6_tunnel close: can't deregister ip6ip6\n"); rtnl_lock(); - ip6ip6_destroy_tunnels(); + ip6_tnl_destroy_tunnels(); rtnl_unlock(); } diff -puN net/ipv6/ipcomp6.c~git-net net/ipv6/ipcomp6.c --- a/net/ipv6/ipcomp6.c~git-net +++ a/net/ipv6/ipcomp6.c @@ -79,9 +79,9 @@ static int ipcomp6_input(struct xfrm_sta skb->ip_summed = CHECKSUM_NONE; /* Remove ipcomp header and decompress original payload */ - iph = skb->nh.ipv6h; + iph = ipv6_hdr(skb); ipch = (void *)skb->data; - skb->h.raw = skb->nh.raw + sizeof(*ipch); + skb->transport_header = skb->network_header + sizeof(*ipch); __skb_pull(skb, sizeof(*ipch)); /* decompression */ @@ -111,7 +111,7 @@ static int ipcomp6_input(struct xfrm_sta skb->truesize += dlen - plen; __skb_put(skb, dlen - plen); - memcpy(skb->data, scratch, dlen); + skb_copy_to_linear_data(skb, scratch, dlen); err = ipch->nexthdr; out_put_cpu: @@ -124,15 +124,13 @@ static int ipcomp6_output(struct xfrm_st { int err; struct ipv6hdr *top_iph; - int hdr_len; struct ipv6_comp_hdr *ipch; struct ipcomp_data *ipcd = x->data; int plen, dlen; u8 *start, *scratch; struct crypto_comp *tfm; int cpu; - - hdr_len = skb->h.raw - skb->data; + int hdr_len = skb_transport_offset(skb); /* check whether datagram len is larger than threshold */ if ((skb->len - hdr_len) < ipcd->threshold) { @@ -145,7 +143,7 @@ static int ipcomp6_output(struct xfrm_st /* compression */ plen = skb->len - hdr_len; dlen = IPCOMP_SCRATCH_SIZE; - start = skb->h.raw; + start = skb_transport_header(skb); cpu = get_cpu(); scratch = *per_cpu_ptr(ipcomp6_scratches, cpu); @@ -166,10 +164,10 @@ static int ipcomp6_output(struct xfrm_st top_iph->payload_len = htons(skb->len - sizeof(struct ipv6hdr)); ipch = (struct ipv6_comp_hdr *)start; - ipch->nexthdr = *skb->nh.raw; + ipch->nexthdr = *skb_network_header(skb); ipch->flags = 0; ipch->cpi = htons((u16 )ntohl(x->id.spi)); - *skb->nh.raw = IPPROTO_COMP; + *skb_network_header(skb) = IPPROTO_COMP; out_ok: return 0; diff -puN net/ipv6/ipv6_sockglue.c~git-net net/ipv6/ipv6_sockglue.c --- a/net/ipv6/ipv6_sockglue.c~git-net +++ a/net/ipv6/ipv6_sockglue.c @@ -101,14 +101,14 @@ static int ipv6_gso_send_check(struct sk if (unlikely(!pskb_may_pull(skb, sizeof(*ipv6h)))) goto out; - ipv6h = skb->nh.ipv6h; + ipv6h = ipv6_hdr(skb); __skb_pull(skb, sizeof(*ipv6h)); err = -EPROTONOSUPPORT; rcu_read_lock(); ops = ipv6_gso_pull_exthdrs(skb, ipv6h->nexthdr); if (likely(ops && ops->gso_send_check)) { - skb->h.raw = skb->data; + skb_reset_transport_header(skb); err = ops->gso_send_check(skb); } rcu_read_unlock(); @@ -137,14 +137,14 @@ static struct sk_buff *ipv6_gso_segment( if (unlikely(!pskb_may_pull(skb, sizeof(*ipv6h)))) goto out; - 
ipv6h = skb->nh.ipv6h; + ipv6h = ipv6_hdr(skb); __skb_pull(skb, sizeof(*ipv6h)); segs = ERR_PTR(-EPROTONOSUPPORT); rcu_read_lock(); ops = ipv6_gso_pull_exthdrs(skb, ipv6h->nexthdr); if (likely(ops && ops->gso_segment)) { - skb->h.raw = skb->data; + skb_reset_transport_header(skb); segs = ops->gso_segment(skb, features); } rcu_read_unlock(); @@ -153,7 +153,7 @@ static struct sk_buff *ipv6_gso_segment( goto out; for (skb = segs; skb; skb = skb->next) { - ipv6h = skb->nh.ipv6h; + ipv6h = ipv6_hdr(skb); ipv6h->payload_len = htons(skb->len - skb->mac_len - sizeof(*ipv6h)); } @@ -694,7 +694,7 @@ done: retv = ip6_ra_control(sk, val, NULL); break; case IPV6_MTU_DISCOVER: - if (val<0 || val>2) + if (val<0 || val>3) goto e_inval; np->pmtudisc = val; retv = 0; @@ -761,6 +761,7 @@ int ipv6_setsockopt(struct sock *sk, int return err; } +EXPORT_SYMBOL(ipv6_setsockopt); #ifdef CONFIG_COMPAT int compat_ipv6_setsockopt(struct sock *sk, int level, int optname, @@ -796,18 +797,37 @@ EXPORT_SYMBOL(compat_ipv6_setsockopt); #endif static int ipv6_getsockopt_sticky(struct sock *sk, struct ipv6_txoptions *opt, - char __user *optval, int len) + int optname, char __user *optval, int len) { struct ipv6_opt_hdr *hdr; - if (!opt || !opt->hopopt) + if (!opt) + return 0; + + switch(optname) { + case IPV6_HOPOPTS: + hdr = opt->hopopt; + break; + case IPV6_RTHDRDSTOPTS: + hdr = opt->dst0opt; + break; + case IPV6_RTHDR: + hdr = (struct ipv6_opt_hdr *)opt->srcrt; + break; + case IPV6_DSTOPTS: + hdr = opt->dst1opt; + break; + default: + return -EINVAL; /* should not happen */ + } + + if (!hdr) return 0; - hdr = opt->hopopt; len = min_t(unsigned int, len, ipv6_optlen(hdr)); - if (copy_to_user(optval, hdr, ipv6_optlen(hdr))) + if (copy_to_user(optval, hdr, len)); return -EFAULT; - return len; + return ipv6_optlen(hdr); } static int do_ipv6_getsockopt(struct sock *sk, int level, int optname, @@ -945,7 +965,7 @@ static int do_ipv6_getsockopt(struct soc lock_sock(sk); len = ipv6_getsockopt_sticky(sk, np->opt, - optval, len); + optname, optval, len); release_sock(sk); return put_user(len, optlen); } @@ -1066,6 +1086,8 @@ int ipv6_getsockopt(struct sock *sk, int return err; } +EXPORT_SYMBOL(ipv6_getsockopt); + #ifdef CONFIG_COMPAT int compat_ipv6_getsockopt(struct sock *sk, int level, int optname, char __user *optval, int __user *optlen) diff -puN net/ipv6/ipv6_syms.c~git-net /dev/null --- a/net/ipv6/ipv6_syms.c +++ /dev/null @@ -1,36 +0,0 @@ - -#include -#include -#include -#include -#include -#include - -EXPORT_SYMBOL(icmpv6_send); -EXPORT_SYMBOL(icmpv6_statistics); -EXPORT_SYMBOL(icmpv6_err_convert); -EXPORT_SYMBOL(ndisc_mc_map); -EXPORT_SYMBOL(register_inet6addr_notifier); -EXPORT_SYMBOL(unregister_inet6addr_notifier); -EXPORT_SYMBOL(ip6_route_output); -EXPORT_SYMBOL(ipv6_setsockopt); -EXPORT_SYMBOL(ipv6_getsockopt); -EXPORT_SYMBOL(inet6_register_protosw); -EXPORT_SYMBOL(inet6_unregister_protosw); -EXPORT_SYMBOL(inet6_add_protocol); -EXPORT_SYMBOL(inet6_del_protocol); -EXPORT_SYMBOL(ip6_xmit); -EXPORT_SYMBOL(inet6_release); -EXPORT_SYMBOL(inet6_bind); -EXPORT_SYMBOL(inet6_getname); -EXPORT_SYMBOL(inet6_ioctl); -EXPORT_SYMBOL(ipv6_get_saddr); -EXPORT_SYMBOL(ipv6_chk_addr); -EXPORT_SYMBOL(in6_dev_finish_destroy); -#ifdef CONFIG_XFRM -EXPORT_SYMBOL(xfrm6_rcv); -EXPORT_SYMBOL(xfrm6_input_addr); -EXPORT_SYMBOL(xfrm6_find_1stfragopt); -#endif -EXPORT_SYMBOL(rt6_lookup); -EXPORT_SYMBOL(ipv6_push_nfrag_opts); diff -puN net/ipv6/mcast.c~git-net net/ipv6/mcast.c --- a/net/ipv6/mcast.c~git-net +++ a/net/ipv6/mcast.c @@ -988,7 +988,7 @@ 
int ipv6_is_mld(struct sk_buff *skb, int if (!pskb_may_pull(skb, sizeof(struct icmp6hdr))) return 0; - pic = (struct icmp6hdr *)skb->h.raw; + pic = icmp6_hdr(skb); switch (pic->icmp6_type) { case ICMPV6_MGM_QUERY: @@ -1167,11 +1167,11 @@ int igmp6_event_query(struct sk_buff *sk return -EINVAL; /* compute payload length excluding extension headers */ - len = ntohs(skb->nh.ipv6h->payload_len) + sizeof(struct ipv6hdr); - len -= (char *)skb->h.raw - (char *)skb->nh.ipv6h; + len = ntohs(ipv6_hdr(skb)->payload_len) + sizeof(struct ipv6hdr); + len -= skb_network_header_len(skb); /* Drop queries with not link local source */ - if (!(ipv6_addr_type(&skb->nh.ipv6h->saddr)&IPV6_ADDR_LINKLOCAL)) + if (!(ipv6_addr_type(&ipv6_hdr(skb)->saddr) & IPV6_ADDR_LINKLOCAL)) return -EINVAL; idev = in6_dev_get(skb->dev); @@ -1179,7 +1179,7 @@ int igmp6_event_query(struct sk_buff *sk if (idev == NULL) return 0; - hdr = (struct icmp6hdr *) skb->h.raw; + hdr = icmp6_hdr(skb); group = (struct in6_addr *) (hdr + 1); group_type = ipv6_addr_type(group); @@ -1212,7 +1212,7 @@ int igmp6_event_query(struct sk_buff *sk in6_dev_put(idev); return -EINVAL; } - mlh2 = (struct mld2_query *) skb->h.raw; + mlh2 = (struct mld2_query *)skb_transport_header(skb); max_delay = (MLDV2_MRC(ntohs(mlh2->mrc))*HZ)/1000; if (!max_delay) max_delay = 1; @@ -1235,7 +1235,7 @@ int igmp6_event_query(struct sk_buff *sk in6_dev_put(idev); return -EINVAL; } - mlh2 = (struct mld2_query *) skb->h.raw; + mlh2 = (struct mld2_query *)skb_transport_header(skb); mark = 1; } } else { @@ -1300,10 +1300,10 @@ int igmp6_event_report(struct sk_buff *s if (!pskb_may_pull(skb, sizeof(struct in6_addr))) return -EINVAL; - hdr = (struct icmp6hdr*) skb->h.raw; + hdr = icmp6_hdr(skb); /* Drop reports with not link local source */ - addr_type = ipv6_addr_type(&skb->nh.ipv6h->saddr); + addr_type = ipv6_addr_type(&ipv6_hdr(skb)->saddr); if (addr_type != IPV6_ADDR_ANY && !(addr_type&IPV6_ADDR_LINKLOCAL)) return -EINVAL; @@ -1411,7 +1411,7 @@ static struct sk_buff *mld_newpack(struc skb_reserve(skb, LL_RESERVED_SPACE(dev)); - if (ipv6_get_lladdr(dev, &addr_buf)) { + if (ipv6_get_lladdr(dev, &addr_buf, IFA_F_TENTATIVE)) { /* : * use unspecified address as the source address * when a valid link-local address is not available. 
@@ -1423,8 +1423,9 @@ static struct sk_buff *mld_newpack(struc memcpy(skb_put(skb, sizeof(ra)), ra, sizeof(ra)); - pmr =(struct mld2_report *)skb_put(skb, sizeof(*pmr)); - skb->h.raw = (unsigned char *)pmr; + skb_set_transport_header(skb, skb_tail_pointer(skb) - skb->data); + skb_put(skb, sizeof(*pmr)); + pmr = (struct mld2_report *)skb_transport_header(skb); pmr->type = ICMPV6_MLD2_REPORT; pmr->resv1 = 0; pmr->csum = 0; @@ -1441,7 +1442,7 @@ static inline int mld_dev_queue_xmit2(st unsigned char ha[MAX_ADDR_LEN]; int err; - ndisc_mc_map(&skb->nh.ipv6h->daddr, ha, dev, 1); + ndisc_mc_map(&ipv6_hdr(skb)->daddr, ha, dev, 1); err = dev->hard_header(skb, dev, ETH_P_IPV6, ha, NULL, skb->len); if (err < 0) { kfree_skb(skb); @@ -1459,20 +1460,21 @@ static inline int mld_dev_queue_xmit(str static void mld_sendpack(struct sk_buff *skb) { - struct ipv6hdr *pip6 = skb->nh.ipv6h; - struct mld2_report *pmr = (struct mld2_report *)skb->h.raw; + struct ipv6hdr *pip6 = ipv6_hdr(skb); + struct mld2_report *pmr = + (struct mld2_report *)skb_transport_header(skb); int payload_len, mldlen; struct inet6_dev *idev = in6_dev_get(skb->dev); int err; IP6_INC_STATS(idev, IPSTATS_MIB_OUTREQUESTS); - payload_len = skb->tail - (unsigned char *)skb->nh.ipv6h - - sizeof(struct ipv6hdr); - mldlen = skb->tail - skb->h.raw; + payload_len = (skb->tail - skb->network_header) - sizeof(*pip6); + mldlen = skb->tail - skb->transport_header; pip6->payload_len = htons(payload_len); pmr->csum = csum_ipv6_magic(&pip6->saddr, &pip6->daddr, mldlen, - IPPROTO_ICMPV6, csum_partial(skb->h.raw, mldlen, 0)); + IPPROTO_ICMPV6, csum_partial(skb_transport_header(skb), + mldlen, 0)); err = NF_HOOK(PF_INET6, NF_IP6_LOCAL_OUT, skb, NULL, skb->dev, mld_dev_queue_xmit); if (!err) { @@ -1506,7 +1508,7 @@ static struct sk_buff *add_grhead(struct pgr->grec_auxwords = 0; pgr->grec_nsrcs = 0; pgr->grec_mca = pmc->mca_addr; /* structure copy */ - pmr = (struct mld2_report *)skb->h.raw; + pmr = (struct mld2_report *)skb_transport_header(skb); pmr->ngrec = htons(ntohs(pmr->ngrec)+1); *ppgr = pgr; return skb; @@ -1539,7 +1541,7 @@ static struct sk_buff *add_grec(struct s if (!*psf_list) goto empty_source; - pmr = skb ? (struct mld2_report *)skb->h.raw : NULL; + pmr = skb ? (struct mld2_report *)skb_transport_header(skb) : NULL; /* EX and TO_EX get a fresh packet, if needed */ if (truncate) { @@ -1791,7 +1793,7 @@ static void igmp6_send(struct in6_addr * skb_reserve(skb, LL_RESERVED_SPACE(dev)); - if (ipv6_get_lladdr(dev, &addr_buf)) { + if (ipv6_get_lladdr(dev, &addr_buf, IFA_F_TENTATIVE)) { /* : * use unspecified address as the source address * when a valid link-local address is not available. 
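Nearly every hunk in the surrounding diffs follows the same mechanical conversion: the old cached pointers skb->nh.ipv6h, skb->h.raw and skb->mac.raw are replaced by the 2.6.22 accessors ipv6_hdr(), skb_transport_header(), skb_mac_header() and friends, and open-coded pointer arithmetic becomes skb_network_offset(), skb_transport_offset() or skb_network_header_len(). Below is a minimal userspace sketch of the idea only; the struct and helper names are simplified stand-ins, not the real struct sk_buff (which stores these fields as sk_buff_data_t, a pointer or a 32-bit offset depending on the architecture).

/*
 * Toy userspace sketch (NOT the kernel's struct sk_buff) of the accessor
 * pattern this patch converts to: protocol headers are reached through
 * offsets kept next to the buffer instead of cached raw pointers such as
 * skb->nh.raw / skb->h.raw.  All names here are simplified stand-ins.
 */
#include <stdio.h>
#include <string.h>

struct toy_skb {
	unsigned char *head;              /* start of the buffer       */
	unsigned char *data;              /* current payload pointer   */
	unsigned int   network_header;    /* offset of the L3 header   */
	unsigned int   transport_header;  /* offset of the L4 header   */
};

/* ~ skb_network_header() */
static unsigned char *toy_network_header(const struct toy_skb *skb)
{
	return skb->head + skb->network_header;
}

/* ~ skb_transport_header() */
static unsigned char *toy_transport_header(const struct toy_skb *skb)
{
	return skb->head + skb->transport_header;
}

/* ~ skb_reset_transport_header(): L4 starts at the current data pointer */
static void toy_reset_transport_header(struct toy_skb *skb)
{
	skb->transport_header = (unsigned int)(skb->data - skb->head);
}

/* ~ skb_network_header_len(): replaces "skb->h.raw - skb->nh.raw" */
static unsigned int toy_network_header_len(const struct toy_skb *skb)
{
	return skb->transport_header - skb->network_header;
}

int main(void)
{
	unsigned char buf[128];
	struct toy_skb skb = { .head = buf, .data = buf };

	memset(buf, 0, sizeof(buf));
	skb.network_header = 0;   /* pretend a 40-byte IPv6 header starts here */
	skb.data += 40;           /* ...and has just been pulled               */
	toy_reset_transport_header(&skb);

	printf("network header at offset %u, length %u\n",
	       skb.network_header, toy_network_header_len(&skb));
	printf("transport header starts %ld bytes after the network header\n",
	       (long)(toy_transport_header(&skb) - toy_network_header(&skb)));
	return 0;
}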
diff -puN net/ipv6/mip6.c~git-net net/ipv6/mip6.c --- a/net/ipv6/mip6.c~git-net +++ a/net/ipv6/mip6.c @@ -90,23 +90,26 @@ int mip6_mh_filter(struct sock *sk, stru { struct ip6_mh *mh; - if (!pskb_may_pull(skb, (skb->h.raw - skb->data) + 8) || - !pskb_may_pull(skb, (skb->h.raw - skb->data) + ((skb->h.raw[1] + 1) << 3))) + if (!pskb_may_pull(skb, (skb_transport_offset(skb)) + 8) || + !pskb_may_pull(skb, (skb_transport_offset(skb) + + ((skb_transport_header(skb)[1] + 1) << 3)))) return -1; - mh = (struct ip6_mh *)skb->h.raw; + mh = (struct ip6_mh *)skb_transport_header(skb); if (mh->ip6mh_hdrlen < mip6_mh_len(mh->ip6mh_type)) { LIMIT_NETDEBUG(KERN_DEBUG "mip6: MH message too short: %d vs >=%d\n", mh->ip6mh_hdrlen, mip6_mh_len(mh->ip6mh_type)); - mip6_param_prob(skb, 0, (&mh->ip6mh_hdrlen) - skb->nh.raw); + mip6_param_prob(skb, 0, ((&mh->ip6mh_hdrlen) - + skb_network_header(skb))); return -1; } if (mh->ip6mh_proto != IPPROTO_NONE) { LIMIT_NETDEBUG(KERN_DEBUG "mip6: MH invalid payload proto = %d\n", mh->ip6mh_proto); - mip6_param_prob(skb, 0, (&mh->ip6mh_proto) - skb->nh.raw); + mip6_param_prob(skb, 0, ((&mh->ip6mh_proto) - + skb_network_header(skb))); return -1; } @@ -127,7 +130,7 @@ static struct mip6_report_rate_limiter m static int mip6_destopt_input(struct xfrm_state *x, struct sk_buff *skb) { - struct ipv6hdr *iph = skb->nh.ipv6h; + struct ipv6hdr *iph = ipv6_hdr(skb); struct ipv6_destopt_hdr *destopt = (struct ipv6_destopt_hdr *)skb->data; if (!ipv6_addr_equal(&iph->saddr, (struct in6_addr *)x->coaddr) && @@ -152,10 +155,10 @@ static int mip6_destopt_output(struct xf iph = (struct ipv6hdr *)skb->data; iph->payload_len = htons(skb->len - sizeof(*iph)); - nexthdr = *skb->nh.raw; - *skb->nh.raw = IPPROTO_DSTOPTS; + nexthdr = *skb_network_header(skb); + *skb_network_header(skb) = IPPROTO_DSTOPTS; - dstopt = (struct ipv6_destopt_hdr *)skb->h.raw; + dstopt = (struct ipv6_destopt_hdr *)skb_transport_header(skb); dstopt->nexthdr = nexthdr; hao = mip6_padn((char *)(dstopt + 1), @@ -215,21 +218,22 @@ static int mip6_destopt_reject(struct xf if (likely(opt->dsthao)) { offset = ipv6_find_tlv(skb, opt->dsthao, IPV6_TLV_HAO); if (likely(offset >= 0)) - hao = (struct ipv6_destopt_hao *)(skb->nh.raw + offset); + hao = (struct ipv6_destopt_hao *) + (skb_network_header(skb) + offset); } skb_get_timestamp(skb, &stamp); - if (!mip6_report_rl_allow(&stamp, &skb->nh.ipv6h->daddr, - hao ? &hao->addr : &skb->nh.ipv6h->saddr, + if (!mip6_report_rl_allow(&stamp, &ipv6_hdr(skb)->daddr, + hao ? 
&hao->addr : &ipv6_hdr(skb)->saddr, opt->iif)) goto out; memset(&sel, 0, sizeof(sel)); - memcpy(&sel.daddr, (xfrm_address_t *)&skb->nh.ipv6h->daddr, + memcpy(&sel.daddr, (xfrm_address_t *)&ipv6_hdr(skb)->daddr, sizeof(sel.daddr)); sel.prefixlen_d = 128; - memcpy(&sel.saddr, (xfrm_address_t *)&skb->nh.ipv6h->saddr, + memcpy(&sel.saddr, (xfrm_address_t *)&ipv6_hdr(skb)->saddr, sizeof(sel.saddr)); sel.prefixlen_s = 128; sel.family = AF_INET6; @@ -253,11 +257,13 @@ static int mip6_destopt_offset(struct xf u8 **nexthdr) { u16 offset = sizeof(struct ipv6hdr); - struct ipv6_opt_hdr *exthdr = (struct ipv6_opt_hdr*)(skb->nh.ipv6h + 1); - unsigned int packet_len = skb->tail - skb->nh.raw; + struct ipv6_opt_hdr *exthdr = + (struct ipv6_opt_hdr *)(ipv6_hdr(skb) + 1); + const unsigned char *nh = skb_network_header(skb); + unsigned int packet_len = skb->tail - skb->network_header; int found_rhdr = 0; - *nexthdr = &skb->nh.ipv6h->nexthdr; + *nexthdr = &ipv6_hdr(skb)->nexthdr; while (offset + 1 <= packet_len) { @@ -288,7 +294,7 @@ static int mip6_destopt_offset(struct xf offset += ipv6_optlen(exthdr); *nexthdr = &exthdr->nexthdr; - exthdr = (struct ipv6_opt_hdr*)(skb->nh.raw + offset); + exthdr = (struct ipv6_opt_hdr *)(nh + offset); } return offset; @@ -361,10 +367,10 @@ static int mip6_rthdr_output(struct xfrm iph = (struct ipv6hdr *)skb->data; iph->payload_len = htons(skb->len - sizeof(*iph)); - nexthdr = *skb->nh.raw; - *skb->nh.raw = IPPROTO_ROUTING; + nexthdr = *skb_network_header(skb); + *skb_network_header(skb) = IPPROTO_ROUTING; - rt2 = (struct rt2_hdr *)skb->h.raw; + rt2 = (struct rt2_hdr *)skb_transport_header(skb); rt2->rt_hdr.nexthdr = nexthdr; rt2->rt_hdr.hdrlen = (x->props.header_len >> 3) - 1; rt2->rt_hdr.type = IPV6_SRCRT_TYPE_2; @@ -383,11 +389,13 @@ static int mip6_rthdr_offset(struct xfrm u8 **nexthdr) { u16 offset = sizeof(struct ipv6hdr); - struct ipv6_opt_hdr *exthdr = (struct ipv6_opt_hdr*)(skb->nh.ipv6h + 1); - unsigned int packet_len = skb->tail - skb->nh.raw; + struct ipv6_opt_hdr *exthdr = + (struct ipv6_opt_hdr *)(ipv6_hdr(skb) + 1); + const unsigned char *nh = skb_network_header(skb); + unsigned int packet_len = skb->tail - skb->network_header; int found_rhdr = 0; - *nexthdr = &skb->nh.ipv6h->nexthdr; + *nexthdr = &ipv6_hdr(skb)->nexthdr; while (offset + 1 <= packet_len) { @@ -397,7 +405,7 @@ static int mip6_rthdr_offset(struct xfrm case NEXTHDR_ROUTING: if (offset + 3 <= packet_len) { struct ipv6_rt_hdr *rt; - rt = (struct ipv6_rt_hdr *)(skb->nh.raw + offset); + rt = (struct ipv6_rt_hdr *)(nh + offset); if (rt->type != 0) return offset; } @@ -417,7 +425,7 @@ static int mip6_rthdr_offset(struct xfrm offset += ipv6_optlen(exthdr); *nexthdr = &exthdr->nexthdr; - exthdr = (struct ipv6_opt_hdr*)(skb->nh.raw + offset); + exthdr = (struct ipv6_opt_hdr *)(nh + offset); } return offset; diff -puN net/ipv6/ndisc.c~git-net net/ipv6/ndisc.c --- a/net/ipv6/ndisc.c~git-net +++ a/net/ipv6/ndisc.c @@ -319,6 +319,8 @@ int ndisc_mc_map(struct in6_addr *addr, return -EINVAL; } +EXPORT_SYMBOL(ndisc_mc_map); + static u32 ndisc_hash(const void *pkey, const struct net_device *dev) { const u32 *p32 = pkey; @@ -447,6 +449,8 @@ static void ndisc_send_na(struct net_dev ifp = ipv6_get_ifaddr(solicited_addr, dev, 1); if (ifp) { src_addr = solicited_addr; + if (ifp->flags & IFA_F_OPTIMISTIC) + override = 0; in6_ifa_put(ifp); } else { if (ipv6_dev_get_saddr(dev, daddr, &tmpaddr)) @@ -488,8 +492,9 @@ static void ndisc_send_na(struct net_dev skb_reserve(skb, LL_RESERVED_SPACE(dev)); ip6_nd_hdr(sk, skb, dev, 
src_addr, daddr, IPPROTO_ICMPV6, len); - msg = (struct nd_msg *)skb_put(skb, len); - skb->h.raw = (unsigned char*)msg; + skb->transport_header = skb->tail; + skb_put(skb, len); + msg = (struct nd_msg *)skb_transport_header(skb); msg->icmph.icmp6_type = NDISC_NEIGHBOUR_ADVERTISEMENT; msg->icmph.icmp6_code = 0; @@ -542,7 +547,8 @@ void ndisc_send_ns(struct net_device *de int send_llinfo; if (saddr == NULL) { - if (ipv6_get_lladdr(dev, &addr_buf)) + if (ipv6_get_lladdr(dev, &addr_buf, + (IFA_F_TENTATIVE|IFA_F_OPTIMISTIC))) return; saddr = &addr_buf; } @@ -578,8 +584,9 @@ void ndisc_send_ns(struct net_device *de skb_reserve(skb, LL_RESERVED_SPACE(dev)); ip6_nd_hdr(sk, skb, dev, saddr, daddr, IPPROTO_ICMPV6, len); - msg = (struct nd_msg *)skb_put(skb, len); - skb->h.raw = (unsigned char*)msg; + skb->transport_header = skb->tail; + skb_put(skb, len); + msg = (struct nd_msg *)skb_transport_header(skb); msg->icmph.icmp6_type = NDISC_NEIGHBOUR_SOLICITATION; msg->icmph.icmp6_code = 0; msg->icmph.icmp6_cksum = 0; @@ -593,7 +600,7 @@ void ndisc_send_ns(struct net_device *de dev->addr_len, dev->type); /* checksum */ - msg->icmph.icmp6_cksum = csum_ipv6_magic(&skb->nh.ipv6h->saddr, + msg->icmph.icmp6_cksum = csum_ipv6_magic(&ipv6_hdr(skb)->saddr, daddr, len, IPPROTO_ICMPV6, csum_partial((__u8 *) msg, @@ -622,9 +629,32 @@ void ndisc_send_rs(struct net_device *de struct sk_buff *skb; struct icmp6hdr *hdr; __u8 * opt; + int send_sllao = dev->addr_len; int len; int err; + +#ifdef CONFIG_IPV6_OPTIMISTIC_DAD + /* + * According to section 2.2 of RFC 4429, we must not + * send router solicitations with a sllao from + * optimistic addresses, but we may send the solicitation + * if we don't include the sllao. So here we check + * if our address is optimistic, and if so, we + * supress the inclusion of the sllao. 
+ */ + if (send_sllao) { + struct inet6_ifaddr *ifp = ipv6_get_ifaddr(saddr, dev, 1); + if (ifp) { + if (ifp->flags & IFA_F_OPTIMISTIC) { + send_sllao = 0; + } + in6_ifa_put(ifp); + } else { + send_sllao = 0; + } + } +#endif ndisc_flow_init(&fl, NDISC_ROUTER_SOLICITATION, saddr, daddr, dev->ifindex); @@ -637,7 +667,7 @@ void ndisc_send_rs(struct net_device *de return; len = sizeof(struct icmp6hdr); - if (dev->addr_len) + if (send_sllao) len += ndisc_opt_addr_space(dev); skb = sock_alloc_send_skb(sk, @@ -655,8 +685,9 @@ void ndisc_send_rs(struct net_device *de skb_reserve(skb, LL_RESERVED_SPACE(dev)); ip6_nd_hdr(sk, skb, dev, saddr, daddr, IPPROTO_ICMPV6, len); - hdr = (struct icmp6hdr *)skb_put(skb, len); - skb->h.raw = (unsigned char*)hdr; + skb->transport_header = skb->tail; + skb_put(skb, len); + hdr = icmp6_hdr(skb); hdr->icmp6_type = NDISC_ROUTER_SOLICITATION; hdr->icmp6_code = 0; hdr->icmp6_cksum = 0; @@ -664,12 +695,12 @@ void ndisc_send_rs(struct net_device *de opt = (u8*) (hdr + 1); - if (dev->addr_len) + if (send_sllao) ndisc_fill_addr_option(opt, ND_OPT_SOURCE_LL_ADDR, dev->dev_addr, dev->addr_len, dev->type); /* checksum */ - hdr->icmp6_cksum = csum_ipv6_magic(&skb->nh.ipv6h->saddr, daddr, len, + hdr->icmp6_cksum = csum_ipv6_magic(&ipv6_hdr(skb)->saddr, daddr, len, IPPROTO_ICMPV6, csum_partial((__u8 *) hdr, len, 0)); @@ -708,8 +739,8 @@ static void ndisc_solicit(struct neighbo struct in6_addr *target = (struct in6_addr *)&neigh->primary_key; int probes = atomic_read(&neigh->probes); - if (skb && ipv6_chk_addr(&skb->nh.ipv6h->saddr, dev, 1)) - saddr = &skb->nh.ipv6h->saddr; + if (skb && ipv6_chk_addr(&ipv6_hdr(skb)->saddr, dev, 1)) + saddr = &ipv6_hdr(skb)->saddr; if ((probes -= neigh->parms->ucast_probes) < 0) { if (!(neigh->nud_state & NUD_VALID)) { @@ -732,11 +763,12 @@ static void ndisc_solicit(struct neighbo static void ndisc_recv_ns(struct sk_buff *skb) { - struct nd_msg *msg = (struct nd_msg *)skb->h.raw; - struct in6_addr *saddr = &skb->nh.ipv6h->saddr; - struct in6_addr *daddr = &skb->nh.ipv6h->daddr; + struct nd_msg *msg = (struct nd_msg *)skb_transport_header(skb); + struct in6_addr *saddr = &ipv6_hdr(skb)->saddr; + struct in6_addr *daddr = &ipv6_hdr(skb)->daddr; u8 *lladdr = NULL; - u32 ndoptlen = skb->tail - msg->opt; + u32 ndoptlen = skb->tail - (skb->transport_header + + offsetof(struct nd_msg, opt)); struct ndisc_options ndopts; struct net_device *dev = skb->dev; struct inet6_ifaddr *ifp; @@ -796,28 +828,40 @@ static void ndisc_recv_ns(struct sk_buff inc = ipv6_addr_is_multicast(daddr); if ((ifp = ipv6_get_ifaddr(&msg->target, dev, 1)) != NULL) { - if (ifp->flags & IFA_F_TENTATIVE) { - /* Address is tentative. If the source - is unspecified address, it is someone - does DAD, otherwise we ignore solicitations - until DAD timer expires. 
- */ - if (!dad) + + if (ifp->flags & (IFA_F_TENTATIVE|IFA_F_OPTIMISTIC)) { + if (dad) { + if (dev->type == ARPHRD_IEEE802_TR) { + const unsigned char *sadr; + sadr = skb_mac_header(skb); + if (((sadr[8] ^ dev->dev_addr[0]) & 0x7f) == 0 && + sadr[9] == dev->dev_addr[1] && + sadr[10] == dev->dev_addr[2] && + sadr[11] == dev->dev_addr[3] && + sadr[12] == dev->dev_addr[4] && + sadr[13] == dev->dev_addr[5]) { + /* looped-back to us */ + goto out; + } + } + + /* + * We are colliding with another node + * who is doing DAD + * so fail our DAD process + */ + addrconf_dad_failure(ifp); goto out; - if (dev->type == ARPHRD_IEEE802_TR) { - unsigned char *sadr = skb->mac.raw; - if (((sadr[8] ^ dev->dev_addr[0]) & 0x7f) == 0 && - sadr[9] == dev->dev_addr[1] && - sadr[10] == dev->dev_addr[2] && - sadr[11] == dev->dev_addr[3] && - sadr[12] == dev->dev_addr[4] && - sadr[13] == dev->dev_addr[5]) { - /* looped-back to us */ + } else { + /* + * This is not a dad solicitation. + * If we are an optimistic node, + * we should respond. + * Otherwise, we should ignore it. + */ + if (!(ifp->flags & IFA_F_OPTIMISTIC)) goto out; - } } - addrconf_dad_failure(ifp); - return; } idev = ifp->idev; @@ -898,11 +942,12 @@ out: static void ndisc_recv_na(struct sk_buff *skb) { - struct nd_msg *msg = (struct nd_msg *)skb->h.raw; - struct in6_addr *saddr = &skb->nh.ipv6h->saddr; - struct in6_addr *daddr = &skb->nh.ipv6h->daddr; + struct nd_msg *msg = (struct nd_msg *)skb_transport_header(skb); + struct in6_addr *saddr = &ipv6_hdr(skb)->saddr; + struct in6_addr *daddr = &ipv6_hdr(skb)->daddr; u8 *lladdr = NULL; - u32 ndoptlen = skb->tail - msg->opt; + u32 ndoptlen = skb->tail - (skb->transport_header + + offsetof(struct nd_msg, opt)); struct ndisc_options ndopts; struct net_device *dev = skb->dev; struct inet6_ifaddr *ifp; @@ -1000,11 +1045,11 @@ out: static void ndisc_recv_rs(struct sk_buff *skb) { - struct rs_msg *rs_msg = (struct rs_msg *) skb->h.raw; + struct rs_msg *rs_msg = (struct rs_msg *)skb_transport_header(skb); unsigned long ndoptlen = skb->len - sizeof(*rs_msg); struct neighbour *neigh; struct inet6_dev *idev; - struct in6_addr *saddr = &skb->nh.ipv6h->saddr; + struct in6_addr *saddr = &ipv6_hdr(skb)->saddr; struct ndisc_options ndopts; u8 *lladdr = NULL; @@ -1057,7 +1102,7 @@ out: static void ndisc_router_discovery(struct sk_buff *skb) { - struct ra_msg *ra_msg = (struct ra_msg *) skb->h.raw; + struct ra_msg *ra_msg = (struct ra_msg *)skb_transport_header(skb); struct neighbour *neigh = NULL; struct inet6_dev *in6_dev; struct rt6_info *rt = NULL; @@ -1068,9 +1113,9 @@ static void ndisc_router_discovery(struc __u8 * opt = (__u8 *)(ra_msg + 1); - optlen = (skb->tail - skb->h.raw) - sizeof(struct ra_msg); + optlen = (skb->tail - skb->transport_header) - sizeof(struct ra_msg); - if (!(ipv6_addr_type(&skb->nh.ipv6h->saddr) & IPV6_ADDR_LINKLOCAL)) { + if (!(ipv6_addr_type(&ipv6_hdr(skb)->saddr) & IPV6_ADDR_LINKLOCAL)) { ND_PRINTK2(KERN_WARNING "ICMPv6 RA: source address is not link-local.\n"); return; @@ -1136,7 +1181,7 @@ static void ndisc_router_discovery(struc pref = ICMPV6_ROUTER_PREF_MEDIUM; #endif - rt = rt6_get_dflt_router(&skb->nh.ipv6h->saddr, skb->dev); + rt = rt6_get_dflt_router(&ipv6_hdr(skb)->saddr, skb->dev); if (rt) neigh = rt->rt6i_nexthop; @@ -1151,7 +1196,7 @@ static void ndisc_router_discovery(struc ND_PRINTK3(KERN_DEBUG "ICMPv6 RA: adding default router.\n"); - rt = rt6_add_dflt_router(&skb->nh.ipv6h->saddr, skb->dev, pref); + rt = rt6_add_dflt_router(&ipv6_hdr(skb)->saddr, skb->dev, pref); if (rt == 
NULL) { ND_PRINTK0(KERN_ERR "ICMPv6 RA: %s() failed to add default route.\n", @@ -1223,7 +1268,7 @@ skip_defrtr: */ if (!neigh) - neigh = __neigh_lookup(&nd_tbl, &skb->nh.ipv6h->saddr, + neigh = __neigh_lookup(&nd_tbl, &ipv6_hdr(skb)->saddr, skb->dev, 1); if (neigh) { u8 *lladdr = NULL; @@ -1252,7 +1297,7 @@ skip_defrtr: if (((struct route_info *)p)->prefix_len > in6_dev->cnf.accept_ra_rt_info_max_plen) continue; rt6_route_rcv(skb->dev, (u8*)p, (p->nd_opt_len) << 3, - &skb->nh.ipv6h->saddr); + &ipv6_hdr(skb)->saddr); } } #endif @@ -1311,13 +1356,13 @@ static void ndisc_redirect_rcv(struct sk int optlen; u8 *lladdr = NULL; - if (!(ipv6_addr_type(&skb->nh.ipv6h->saddr) & IPV6_ADDR_LINKLOCAL)) { + if (!(ipv6_addr_type(&ipv6_hdr(skb)->saddr) & IPV6_ADDR_LINKLOCAL)) { ND_PRINTK2(KERN_WARNING "ICMPv6 Redirect: source address is not link-local.\n"); return; } - optlen = skb->tail - skb->h.raw; + optlen = skb->tail - skb->transport_header; optlen -= sizeof(struct icmp6hdr) + 2 * sizeof(struct in6_addr); if (optlen < 0) { @@ -1326,7 +1371,7 @@ static void ndisc_redirect_rcv(struct sk return; } - icmph = (struct icmp6hdr *) skb->h.raw; + icmph = icmp6_hdr(skb); target = (struct in6_addr *) (icmph + 1); dest = target + 1; @@ -1376,8 +1421,8 @@ static void ndisc_redirect_rcv(struct sk neigh = __neigh_lookup(&nd_tbl, target, skb->dev, 1); if (neigh) { - rt6_redirect(dest, &skb->nh.ipv6h->daddr, - &skb->nh.ipv6h->saddr, neigh, lladdr, + rt6_redirect(dest, &ipv6_hdr(skb)->daddr, + &ipv6_hdr(skb)->saddr, neigh, lladdr, on_link); neigh_release(neigh); } @@ -1406,21 +1451,21 @@ void ndisc_send_redirect(struct sk_buff dev = skb->dev; - if (ipv6_get_lladdr(dev, &saddr_buf)) { + if (ipv6_get_lladdr(dev, &saddr_buf, IFA_F_TENTATIVE)) { ND_PRINTK2(KERN_WARNING "ICMPv6 Redirect: no link-local address on %s\n", dev->name); return; } - if (!ipv6_addr_equal(&skb->nh.ipv6h->daddr, target) && + if (!ipv6_addr_equal(&ipv6_hdr(skb)->daddr, target) && !(ipv6_addr_type(target) & IPV6_ADDR_LINKLOCAL)) { ND_PRINTK2(KERN_WARNING "ICMPv6 Redirect: target address is not link-local.\n"); return; } - ndisc_flow_init(&fl, NDISC_REDIRECT, &saddr_buf, &skb->nh.ipv6h->saddr, + ndisc_flow_init(&fl, NDISC_REDIRECT, &saddr_buf, &ipv6_hdr(skb)->saddr, dev->ifindex); dst = ip6_route_output(NULL, &fl); @@ -1475,11 +1520,12 @@ void ndisc_send_redirect(struct sk_buff hlen = 0; skb_reserve(buff, LL_RESERVED_SPACE(dev)); - ip6_nd_hdr(sk, buff, dev, &saddr_buf, &skb->nh.ipv6h->saddr, + ip6_nd_hdr(sk, buff, dev, &saddr_buf, &ipv6_hdr(skb)->saddr, IPPROTO_ICMPV6, len); - icmph = (struct icmp6hdr *)skb_put(buff, len); - buff->h.raw = (unsigned char*)icmph; + skb_set_transport_header(buff, skb_tail_pointer(buff) - buff->data); + skb_put(buff, len); + icmph = icmp6_hdr(buff); memset(icmph, 0, sizeof(struct icmp6hdr)); icmph->icmp6_type = NDISC_REDIRECT; @@ -1491,7 +1537,7 @@ void ndisc_send_redirect(struct sk_buff addrp = (struct in6_addr *)(icmph + 1); ipv6_addr_copy(addrp, target); addrp++; - ipv6_addr_copy(addrp, &skb->nh.ipv6h->daddr); + ipv6_addr_copy(addrp, &ipv6_hdr(skb)->daddr); opt = (u8*) (addrp + 1); @@ -1512,9 +1558,9 @@ void ndisc_send_redirect(struct sk_buff *(opt++) = (rd_len >> 3); opt += 6; - memcpy(opt, skb->nh.ipv6h, rd_len - 8); + memcpy(opt, ipv6_hdr(skb), rd_len - 8); - icmph->icmp6_cksum = csum_ipv6_magic(&saddr_buf, &skb->nh.ipv6h->saddr, + icmph->icmp6_cksum = csum_ipv6_magic(&saddr_buf, &ipv6_hdr(skb)->saddr, len, IPPROTO_ICMPV6, csum_partial((u8 *) icmph, len, 0)); @@ -1544,14 +1590,14 @@ int ndisc_rcv(struct sk_buff 
*skb) if (!pskb_may_pull(skb, skb->len)) return 0; - msg = (struct nd_msg *) skb->h.raw; + msg = (struct nd_msg *)skb_transport_header(skb); - __skb_push(skb, skb->data-skb->h.raw); + __skb_push(skb, skb->data - skb_transport_header(skb)); - if (skb->nh.ipv6h->hop_limit != 255) { + if (ipv6_hdr(skb)->hop_limit != 255) { ND_PRINTK2(KERN_WARNING "ICMPv6 NDISC: invalid hop-limit: %d\n", - skb->nh.ipv6h->hop_limit); + ipv6_hdr(skb)->hop_limit); return 0; } @@ -1584,7 +1630,7 @@ int ndisc_rcv(struct sk_buff *skb) case NDISC_REDIRECT: ndisc_redirect_rcv(skb); break; - }; + } return 0; } diff -puN net/ipv6/netfilter.c~git-net net/ipv6/netfilter.c --- a/net/ipv6/netfilter.c~git-net +++ a/net/ipv6/netfilter.c @@ -11,7 +11,7 @@ int ip6_route_me_harder(struct sk_buff *skb) { - struct ipv6hdr *iph = skb->nh.ipv6h; + struct ipv6hdr *iph = ipv6_hdr(skb); struct dst_entry *dst; struct flowi fl = { .oif = skb->sk ? skb->sk->sk_bound_dev_if : 0, @@ -61,7 +61,7 @@ static void nf_ip6_saveroute(const struc struct ip6_rt_info *rt_info = nf_info_reroute(info); if (info->hook == NF_IP6_LOCAL_OUT) { - struct ipv6hdr *iph = skb->nh.ipv6h; + struct ipv6hdr *iph = ipv6_hdr(skb); rt_info->daddr = iph->daddr; rt_info->saddr = iph->saddr; @@ -73,7 +73,7 @@ static int nf_ip6_reroute(struct sk_buff struct ip6_rt_info *rt_info = nf_info_reroute(info); if (info->hook == NF_IP6_LOCAL_OUT) { - struct ipv6hdr *iph = (*pskb)->nh.ipv6h; + struct ipv6hdr *iph = ipv6_hdr(*pskb); if (!ipv6_addr_equal(&iph->daddr, &rt_info->daddr) || !ipv6_addr_equal(&iph->saddr, &rt_info->saddr)) return ip6_route_me_harder(*pskb); @@ -84,7 +84,7 @@ static int nf_ip6_reroute(struct sk_buff __sum16 nf_ip6_checksum(struct sk_buff *skb, unsigned int hook, unsigned int dataoff, u_int8_t protocol) { - struct ipv6hdr *ip6h = skb->nh.ipv6h; + struct ipv6hdr *ip6h = ipv6_hdr(skb); __sum16 csum = 0; switch (skb->ip_summed) { diff -puN net/ipv6/netfilter/ip6_queue.c~git-net net/ipv6/netfilter/ip6_queue.c --- a/net/ipv6/netfilter/ip6_queue.c~git-net +++ a/net/ipv6/netfilter/ip6_queue.c @@ -11,18 +11,6 @@ * This program is free software; you can redistribute it and/or modify * it under the terms of the GNU General Public License version 2 as * published by the Free Software Foundation. - * - * 2001-11-06: First try. Working with ip_queue.c for IPv4 and trying - * to adapt it to IPv6 - * HEAVILY based in ipqueue.c by James Morris. It's just - * a little modified version of it, so he's nearly the - * real coder of this. - * Few changes needed, mainly the hard_routing code and - * the netlink socket protocol (we're NETLINK_IP6_FW). - * 2002-06-25: Code cleanup. [JM: ported cleanup over from ip_queue.c] - * 2005-02-04: Added /proc counter for dropped packets; fixed so - * packets aren't delivered to user space if they're going - * to be dropped. 
*/ #include #include @@ -189,12 +177,13 @@ ipq_flush(int verdict) static struct sk_buff * ipq_build_packet_message(struct ipq_queue_entry *entry, int *errp) { - unsigned char *old_tail; + sk_buff_data_t old_tail; size_t size = 0; size_t data_len = 0; struct sk_buff *skb; struct ipq_packet_msg *pmsg; struct nlmsghdr *nlh; + struct timeval tv; read_lock_bh(&queue_lock); @@ -232,15 +221,16 @@ ipq_build_packet_message(struct ipq_queu if (!skb) goto nlmsg_failure; - old_tail= skb->tail; + old_tail = skb->tail; nlh = NLMSG_PUT(skb, 0, 0, IPQM_PACKET, size - sizeof(*nlh)); pmsg = NLMSG_DATA(nlh); memset(pmsg, 0, sizeof(*pmsg)); pmsg->packet_id = (unsigned long )entry; pmsg->data_len = data_len; - pmsg->timestamp_sec = entry->skb->tstamp.off_sec; - pmsg->timestamp_usec = entry->skb->tstamp.off_usec; + tv = ktime_to_timeval(entry->skb->tstamp); + pmsg->timestamp_sec = tv.tv_sec; + pmsg->timestamp_usec = tv.tv_usec; pmsg->mark = entry->skb->mark; pmsg->hook = entry->info->hook; pmsg->hw_protocol = entry->skb->protocol; @@ -376,7 +366,7 @@ ipq_mangle_ipv6(ipq_verdict_msg_t *v, st } if (!skb_make_writable(&e->skb, v->data_len)) return -ENOMEM; - memcpy(e->skb->data, v->payload, v->data_len); + skb_copy_to_linear_data(e->skb, v->payload, v->data_len); e->skb->ip_summed = CHECKSUM_NONE; return 0; @@ -485,7 +475,7 @@ ipq_rcv_skb(struct sk_buff *skb) if (skblen < sizeof(*nlh)) return; - nlh = (struct nlmsghdr *)skb->data; + nlh = nlmsg_hdr(skb); nlmsglen = nlh->nlmsg_len; if (nlmsglen < sizeof(*nlh) || skblen < nlmsglen) return; @@ -667,7 +657,7 @@ static int __init ip6_queue_init(void) struct proc_dir_entry *proc; netlink_register_notifier(&ipq_nl_notifier); - ipqnl = netlink_kernel_create(NETLINK_IP6_FW, 0, ipq_rcv_sk, + ipqnl = netlink_kernel_create(NETLINK_IP6_FW, 0, ipq_rcv_sk, NULL, THIS_MODULE); if (ipqnl == NULL) { printk(KERN_ERR "ip6_queue: failed to create netlink socket\n"); diff -puN net/ipv6/netfilter/ip6_tables.c~git-net net/ipv6/netfilter/ip6_tables.c --- a/net/ipv6/netfilter/ip6_tables.c~git-net +++ a/net/ipv6/netfilter/ip6_tables.c @@ -7,15 +7,6 @@ * This program is free software; you can redistribute it and/or modify * it under the terms of the GNU General Public License version 2 as * published by the Free Software Foundation. 
- * - * 19 Jan 2002 Harald Welte - * - increase module usage count as soon as we have rules inside - * a table - * 06 Jun 2002 Andras Kis-Szabo - * - new extension header parser code - * 15 Oct 2005 Harald Welte - * - Unification of {ip,ip6}_tables into x_tables - * - Removed tcp and udp code, since it's not ipv6 specific */ #include @@ -115,7 +106,7 @@ ip6_packet_match(const struct sk_buff *s { size_t i; unsigned long ret; - const struct ipv6hdr *ipv6 = skb->nh.ipv6h; + const struct ipv6hdr *ipv6 = ipv6_hdr(skb); #define FWINV(bool,invflg) ((bool) ^ !!(ip6info->invflags & invflg)) @@ -301,7 +292,7 @@ ip6t_do_table(struct sk_buff **pskb, goto no_match; ADD_COUNTER(e->counters, - ntohs((*pskb)->nh.ipv6h->payload_len) + ntohs(ipv6_hdr(*pskb)->payload_len) + IPV6_HDR_LEN, 1); @@ -1448,8 +1439,8 @@ static void __exit ip6_tables_fini(void) int ipv6_find_hdr(const struct sk_buff *skb, unsigned int *offset, int target, unsigned short *fragoff) { - unsigned int start = (u8*)(skb->nh.ipv6h + 1) - skb->data; - u8 nexthdr = skb->nh.ipv6h->nexthdr; + unsigned int start = skb_network_offset(skb) + sizeof(struct ipv6hdr); + u8 nexthdr = ipv6_hdr(skb)->nexthdr; unsigned int len = skb->len - start; if (fragoff) diff -puN net/ipv6/netfilter/ip6t_HL.c~git-net net/ipv6/netfilter/ip6t_HL.c --- a/net/ipv6/netfilter/ip6t_HL.c~git-net +++ a/net/ipv6/netfilter/ip6t_HL.c @@ -32,7 +32,7 @@ static unsigned int ip6t_hl_target(struc if (!skb_make_writable(pskb, (*pskb)->len)) return NF_DROP; - ip6h = (*pskb)->nh.ipv6h; + ip6h = ipv6_hdr(*pskb); switch (info->mode) { case IP6T_HL_SET: diff -puN net/ipv6/netfilter/ip6t_LOG.c~git-net net/ipv6/netfilter/ip6t_LOG.c --- a/net/ipv6/netfilter/ip6t_LOG.c~git-net +++ a/net/ipv6/netfilter/ip6t_LOG.c @@ -396,8 +396,8 @@ ip6t_log_packet(unsigned int pf, /* MAC logging for input chain only. 
*/ printk("MAC="); if (skb->dev && (len = skb->dev->hard_header_len) && - skb->mac.raw != skb->nh.raw) { - unsigned char *p = skb->mac.raw; + skb->mac_header != skb->network_header) { + const unsigned char *p = skb_mac_header(skb); int i; if (skb->dev->type == ARPHRD_SIT && @@ -412,7 +412,8 @@ ip6t_log_packet(unsigned int pf, printk(" "); if (skb->dev->type == ARPHRD_SIT) { - struct iphdr *iph = (struct iphdr *)skb->mac.raw; + const struct iphdr *iph = + (struct iphdr *)skb_mac_header(skb); printk("TUNNEL=%u.%u.%u.%u->%u.%u.%u.%u ", NIPQUAD(iph->saddr), NIPQUAD(iph->daddr)); @@ -421,7 +422,7 @@ ip6t_log_packet(unsigned int pf, printk(" "); } - dump_packet(loginfo, skb, (u8*)skb->nh.ipv6h - skb->data, 1); + dump_packet(loginfo, skb, skb_network_offset(skb), 1); printk("\n"); spin_unlock_bh(&log_lock); } @@ -489,14 +490,10 @@ static int __init ip6t_log_init(void) ret = xt_register_target(&ip6t_log_reg); if (ret < 0) return ret; - if (nf_log_register(PF_INET6, &ip6t_logger) < 0) { - printk(KERN_WARNING "ip6t_LOG: not logging via system console " - "since somebody else already registered for PF_INET6\n"); - /* we cannot make module load fail here, since otherwise - * ip6tables userspace would abort */ - } - - return 0; + ret = nf_log_register(PF_INET6, &ip6t_logger); + if (ret < 0 && ret != -EEXIST) + xt_unregister_target(&ip6t_log_reg); + return ret; } static void __exit ip6t_log_fini(void) diff -puN net/ipv6/netfilter/ip6t_REJECT.c~git-net net/ipv6/netfilter/ip6t_REJECT.c --- a/net/ipv6/netfilter/ip6t_REJECT.c~git-net +++ a/net/ipv6/netfilter/ip6t_REJECT.c @@ -47,7 +47,7 @@ static void send_reset(struct sk_buff *o struct tcphdr otcph, *tcph; unsigned int otcplen, hh_len; int tcphoff, needs_ack; - struct ipv6hdr *oip6h = oldskb->nh.ipv6h, *ip6h; + struct ipv6hdr *oip6h = ipv6_hdr(oldskb), *ip6h; struct dst_entry *dst = NULL; u8 proto; struct flowi fl; @@ -120,8 +120,9 @@ static void send_reset(struct sk_buff *o skb_reserve(nskb, hh_len + dst->header_len); - ip6h = nskb->nh.ipv6h = (struct ipv6hdr *) - skb_put(nskb, sizeof(struct ipv6hdr)); + skb_put(nskb, sizeof(struct ipv6hdr)); + skb_reset_network_header(nskb); + ip6h = ipv6_hdr(nskb); ip6h->version = 6; ip6h->hop_limit = dst_metric(dst, RTAX_HOPLIMIT); ip6h->nexthdr = IPPROTO_TCP; @@ -155,8 +156,8 @@ static void send_reset(struct sk_buff *o tcph->check = 0; /* Adjust TCP checksum */ - tcph->check = csum_ipv6_magic(&nskb->nh.ipv6h->saddr, - &nskb->nh.ipv6h->daddr, + tcph->check = csum_ipv6_magic(&ipv6_hdr(nskb)->saddr, + &ipv6_hdr(nskb)->daddr, sizeof(struct tcphdr), IPPROTO_TCP, csum_partial((char *)tcph, sizeof(struct tcphdr), 0)); diff -puN net/ipv6/netfilter/ip6t_eui64.c~git-net net/ipv6/netfilter/ip6t_eui64.c --- a/net/ipv6/netfilter/ip6t_eui64.c~git-net +++ a/net/ipv6/netfilter/ip6t_eui64.c @@ -32,8 +32,8 @@ match(const struct sk_buff *skb, unsigned char eui64[8]; int i = 0; - if (!(skb->mac.raw >= skb->head && - (skb->mac.raw + ETH_HLEN) <= skb->data) && + if (!(skb_mac_header(skb) >= skb->head && + (skb_mac_header(skb) + ETH_HLEN) <= skb->data) && offset != 0) { *hotdrop = 1; return 0; @@ -42,7 +42,7 @@ match(const struct sk_buff *skb, memset(eui64, 0, sizeof(eui64)); if (eth_hdr(skb)->h_proto == htons(ETH_P_IPV6)) { - if (skb->nh.ipv6h->version == 0x6) { + if (ipv6_hdr(skb)->version == 0x6) { memcpy(eui64, eth_hdr(skb)->h_source, 3); memcpy(eui64 + 5, eth_hdr(skb)->h_source + 3, 3); eui64[3] = 0xff; @@ -50,7 +50,7 @@ match(const struct sk_buff *skb, eui64[0] |= 0x02; i = 0; - while ((skb->nh.ipv6h->saddr.s6_addr[8+i] == eui64[i]) + 
while ((ipv6_hdr(skb)->saddr.s6_addr[8 + i] == eui64[i]) && (i < 8)) i++; diff -puN net/ipv6/netfilter/ip6t_hl.c~git-net net/ipv6/netfilter/ip6t_hl.c --- a/net/ipv6/netfilter/ip6t_hl.c~git-net +++ a/net/ipv6/netfilter/ip6t_hl.c @@ -25,7 +25,7 @@ static int match(const struct sk_buff *s int offset, unsigned int protoff, int *hotdrop) { const struct ip6t_hl_info *info = matchinfo; - const struct ipv6hdr *ip6h = skb->nh.ipv6h; + const struct ipv6hdr *ip6h = ipv6_hdr(skb); switch (info->mode) { case IP6T_HL_EQ: diff -puN net/ipv6/netfilter/ip6t_ipv6header.c~git-net net/ipv6/netfilter/ip6t_ipv6header.c --- a/net/ipv6/netfilter/ip6t_ipv6header.c~git-net +++ a/net/ipv6/netfilter/ip6t_ipv6header.c @@ -45,7 +45,7 @@ ipv6header_match(const struct sk_buff *s /* Make sure this isn't an evil packet */ /* type of the 1st exthdr */ - nexthdr = skb->nh.ipv6h->nexthdr; + nexthdr = ipv6_hdr(skb)->nexthdr; /* pointer to the 1st exthdr */ ptr = sizeof(struct ipv6hdr); /* available length */ diff -puN net/ipv6/netfilter/ip6table_filter.c~git-net net/ipv6/netfilter/ip6table_filter.c --- a/net/ipv6/netfilter/ip6table_filter.c~git-net +++ a/net/ipv6/netfilter/ip6table_filter.c @@ -102,7 +102,7 @@ ip6t_local_out_hook(unsigned int hook, #if 0 /* root is playing with raw sockets. */ if ((*pskb)->len < sizeof(struct iphdr) - || (*pskb)->nh.iph->ihl * 4 < sizeof(struct iphdr)) { + || ip_hdrlen(*pskb) < sizeof(struct iphdr)) { if (net_ratelimit()) printk("ip6t_hook: happy cracking.\n"); return NF_ACCEPT; diff -puN net/ipv6/netfilter/ip6table_mangle.c~git-net net/ipv6/netfilter/ip6table_mangle.c --- a/net/ipv6/netfilter/ip6table_mangle.c~git-net +++ a/net/ipv6/netfilter/ip6table_mangle.c @@ -7,8 +7,6 @@ * This program is free software; you can redistribute it and/or modify * it under the terms of the GNU General Public License version 2 as * published by the Free Software Foundation. - * - * Extended to all five netfilter hooks by Brad Chapman & Harald Welte */ #include #include @@ -138,7 +136,7 @@ ip6t_local_hook(unsigned int hook, #if 0 /* root is playing with raw sockets. */ if ((*pskb)->len < sizeof(struct iphdr) - || (*pskb)->nh.iph->ihl * 4 < sizeof(struct iphdr)) { + || ip_hdrlen(*pskb) < sizeof(struct iphdr)) { if (net_ratelimit()) printk("ip6t_hook: happy cracking.\n"); return NF_ACCEPT; @@ -146,21 +144,21 @@ ip6t_local_hook(unsigned int hook, #endif /* save source/dest address, mark, hoplimit, flowlabel, priority, */ - memcpy(&saddr, &(*pskb)->nh.ipv6h->saddr, sizeof(saddr)); - memcpy(&daddr, &(*pskb)->nh.ipv6h->daddr, sizeof(daddr)); + memcpy(&saddr, &ipv6_hdr(*pskb)->saddr, sizeof(saddr)); + memcpy(&daddr, &ipv6_hdr(*pskb)->daddr, sizeof(daddr)); mark = (*pskb)->mark; - hop_limit = (*pskb)->nh.ipv6h->hop_limit; + hop_limit = ipv6_hdr(*pskb)->hop_limit; /* flowlabel and prio (includes version, which shouldn't change either */ - flowlabel = *((u_int32_t *) (*pskb)->nh.ipv6h); + flowlabel = *((u_int32_t *)ipv6_hdr(*pskb)); ret = ip6t_do_table(pskb, hook, in, out, &packet_mangler); if (ret != NF_DROP && ret != NF_STOLEN - && (memcmp(&(*pskb)->nh.ipv6h->saddr, &saddr, sizeof(saddr)) - || memcmp(&(*pskb)->nh.ipv6h->daddr, &daddr, sizeof(daddr)) + && (memcmp(&ipv6_hdr(*pskb)->saddr, &saddr, sizeof(saddr)) + || memcmp(&ipv6_hdr(*pskb)->daddr, &daddr, sizeof(daddr)) || (*pskb)->mark != mark - || (*pskb)->nh.ipv6h->hop_limit != hop_limit)) + || ipv6_hdr(*pskb)->hop_limit != hop_limit)) return ip6_route_me_harder(*pskb) == 0 ? 
ret : NF_DROP; return ret; diff -puN net/ipv6/netfilter/nf_conntrack_l3proto_ipv6.c~git-net net/ipv6/netfilter/nf_conntrack_l3proto_ipv6.c --- a/net/ipv6/netfilter/nf_conntrack_l3proto_ipv6.c~git-net +++ a/net/ipv6/netfilter/nf_conntrack_l3proto_ipv6.c @@ -7,17 +7,6 @@ * * Author: * Yasuyuki Kozakai @USAGI - * - * 16 Dec 2003: Yasuyuki Kozakai @USAGI - * - support Layer 3 protocol independent connection tracking. - * Based on the original ip_conntrack code which had the following - * copyright information: - * (C) 1999-2001 Paul `Rusty' Russell - * (C) 2002-2004 Netfilter Core Team - * - * 23 Mar 2004: Yasuyuki Kozakai @USAGI - * - add get_features() to support various size of conntrack - * structures. */ #include @@ -138,16 +127,10 @@ static int ipv6_prepare(struct sk_buff **pskb, unsigned int hooknum, unsigned int *dataoff, u_int8_t *protonum) { - unsigned int extoff; - unsigned char pnum; - int protoff; - - extoff = (u8*)((*pskb)->nh.ipv6h + 1) - (*pskb)->data; - pnum = (*pskb)->nh.ipv6h->nexthdr; - - protoff = nf_ct_ipv6_skip_exthdr(*pskb, extoff, &pnum, - (*pskb)->len - extoff); - + unsigned int extoff = (u8 *)(ipv6_hdr(*pskb) + 1) - (*pskb)->data; + unsigned char pnum = ipv6_hdr(*pskb)->nexthdr; + int protoff = nf_ct_ipv6_skip_exthdr(*pskb, extoff, &pnum, + (*pskb)->len - extoff); /* * (protoff == (*pskb)->len) mean that the packet doesn't have no data * except of IPv6 & ext headers. but it's tracked anyway. - YK @@ -179,9 +162,8 @@ static unsigned int ipv6_confirm(unsigne struct nf_conn_help *help; enum ip_conntrack_info ctinfo; unsigned int ret, protoff; - unsigned int extoff = (u8*)((*pskb)->nh.ipv6h + 1) - - (*pskb)->data; - unsigned char pnum = (*pskb)->nh.ipv6h->nexthdr; + unsigned int extoff = (u8 *)(ipv6_hdr(*pskb) + 1) - (*pskb)->data; + unsigned char pnum = ipv6_hdr(*pskb)->nexthdr; /* This is where we call the helper: as the packet goes out. */ diff -puN net/ipv6/netfilter/nf_conntrack_proto_icmpv6.c~git-net net/ipv6/netfilter/nf_conntrack_proto_icmpv6.c --- a/net/ipv6/netfilter/nf_conntrack_proto_icmpv6.c~git-net +++ a/net/ipv6/netfilter/nf_conntrack_proto_icmpv6.c @@ -7,13 +7,6 @@ * * Author: * Yasuyuki Kozakai @USAGI - * - * 16 Dec 2003: Yasuyuki Kozakai @USAGI - * - ICMPv6 tracking support. Derived from the original ip_conntrack code - * net/ipv4/netfilter/ip_conntrack_proto_icmp.c which had the following - * copyright information: - * (C) 1999-2001 Paul `Rusty' Russell - * (C) 2002-2004 Netfilter Core Team */ #include diff -puN net/ipv6/netfilter/nf_conntrack_reasm.c~git-net net/ipv6/netfilter/nf_conntrack_reasm.c --- a/net/ipv6/netfilter/nf_conntrack_reasm.c~git-net +++ a/net/ipv6/netfilter/nf_conntrack_reasm.c @@ -82,7 +82,7 @@ struct nf_ct_frag6_queue struct sk_buff *fragments; int len; int meat; - struct timeval stamp; + ktime_t stamp; unsigned int csum; __u8 last_in; /* has first/last segment arrived? 
*/ #define COMPLETE 4 @@ -353,9 +353,7 @@ nf_ct_frag6_create(unsigned int hash, __ ipv6_addr_copy(&fq->saddr, src); ipv6_addr_copy(&fq->daddr, dst); - init_timer(&fq->timer); - fq->timer.function = nf_ct_frag6_expire; - fq->timer.data = (long) fq; + setup_timer(&fq->timer, nf_ct_frag6_expire, (unsigned long)fq); spin_lock_init(&fq->lock); atomic_set(&fq->refcnt, 1); @@ -400,19 +398,20 @@ static int nf_ct_frag6_queue(struct nf_c } offset = ntohs(fhdr->frag_off) & ~0x7; - end = offset + (ntohs(skb->nh.ipv6h->payload_len) - - ((u8 *) (fhdr + 1) - (u8 *) (skb->nh.ipv6h + 1))); + end = offset + (ntohs(ipv6_hdr(skb)->payload_len) - + ((u8 *)(fhdr + 1) - (u8 *)(ipv6_hdr(skb) + 1))); if ((unsigned int)end > IPV6_MAXPLEN) { DEBUGP("offset is too large.\n"); return -1; } - if (skb->ip_summed == CHECKSUM_COMPLETE) + if (skb->ip_summed == CHECKSUM_COMPLETE) { + const unsigned char *nh = skb_network_header(skb); skb->csum = csum_sub(skb->csum, - csum_partial(skb->nh.raw, - (u8*)(fhdr + 1) - skb->nh.raw, + csum_partial(nh, (u8 *)(fhdr + 1) - nh, 0)); + } /* Is this the final fragment? */ if (!(fhdr->frag_off & htons(IP6_MF))) { @@ -542,7 +541,7 @@ static int nf_ct_frag6_queue(struct nf_c fq->fragments = skb; skb->dev = NULL; - skb_get_timestamp(skb, &fq->stamp); + fq->stamp = skb->tstamp; fq->meat += skb->len; atomic_add(skb->truesize, &nf_ct_frag6_mem); @@ -583,7 +582,9 @@ nf_ct_frag6_reasm(struct nf_ct_frag6_que BUG_TRAP(NFCT_FRAG6_CB(head)->offset == 0); /* Unfragmented part is taken from the first segment. */ - payload_len = (head->data - head->nh.raw) - sizeof(struct ipv6hdr) + fq->len - sizeof(struct frag_hdr); + payload_len = ((head->data - skb_network_header(head)) - + sizeof(struct ipv6hdr) + fq->len - + sizeof(struct frag_hdr)); if (payload_len > IPV6_MAXPLEN) { DEBUGP("payload len is too large.\n"); goto out_oversize; @@ -624,15 +625,15 @@ nf_ct_frag6_reasm(struct nf_ct_frag6_que /* We have to remove fragment header from datagram and to relocate * header in order to calculate ICV correctly. */ - head->nh.raw[fq->nhoffset] = head->h.raw[0]; + skb_network_header(head)[fq->nhoffset] = skb_transport_header(head)[0]; memmove(head->head + sizeof(struct frag_hdr), head->head, (head->data - head->head) - sizeof(struct frag_hdr)); - head->mac.raw += sizeof(struct frag_hdr); - head->nh.raw += sizeof(struct frag_hdr); + head->mac_header += sizeof(struct frag_hdr); + head->network_header += sizeof(struct frag_hdr); skb_shinfo(head)->frag_list = head->next; - head->h.raw = head->data; - skb_push(head, head->data - head->nh.raw); + skb_reset_transport_header(head); + skb_push(head, head->data - skb_network_header(head)); atomic_sub(head->truesize, &nf_ct_frag6_mem); for (fp=head->next; fp; fp = fp->next) { @@ -648,12 +649,14 @@ nf_ct_frag6_reasm(struct nf_ct_frag6_que head->next = NULL; head->dev = dev; - skb_set_timestamp(head, &fq->stamp); - head->nh.ipv6h->payload_len = htons(payload_len); + head->tstamp = fq->stamp; + ipv6_hdr(head)->payload_len = htons(payload_len); /* Yes, and fold redundant checksum back. 
8) */ if (head->ip_summed == CHECKSUM_COMPLETE) - head->csum = csum_partial(head->nh.raw, head->h.raw-head->nh.raw, head->csum); + head->csum = csum_partial(skb_network_header(head), + skb_network_header_len(head), + head->csum); fq->fragments = NULL; @@ -701,9 +704,10 @@ out_fail: static int find_prev_fhdr(struct sk_buff *skb, u8 *prevhdrp, int *prevhoff, int *fhoff) { - u8 nexthdr = skb->nh.ipv6h->nexthdr; - u8 prev_nhoff = (u8 *)&skb->nh.ipv6h->nexthdr - skb->data; - int start = (u8 *)(skb->nh.ipv6h+1) - skb->data; + u8 nexthdr = ipv6_hdr(skb)->nexthdr; + const int netoff = skb_network_offset(skb); + u8 prev_nhoff = netoff + offsetof(struct ipv6hdr, nexthdr); + int start = netoff + sizeof(struct ipv6hdr); int len = skb->len - start; u8 prevhdr = NEXTHDR_IPV6; @@ -759,7 +763,7 @@ struct sk_buff *nf_ct_frag6_gather(struc struct sk_buff *ret_skb = NULL; /* Jumbo payload inhibits frag. header */ - if (skb->nh.ipv6h->payload_len == 0) { + if (ipv6_hdr(skb)->payload_len == 0) { DEBUGP("payload len = 0\n"); return skb; } @@ -780,9 +784,9 @@ struct sk_buff *nf_ct_frag6_gather(struc goto ret_orig; } - clone->h.raw = clone->data + fhoff; - hdr = clone->nh.ipv6h; - fhdr = (struct frag_hdr *)clone->h.raw; + skb_set_transport_header(clone, fhoff); + hdr = ipv6_hdr(clone); + fhdr = (struct frag_hdr *)skb_transport_header(clone); if (!(fhdr->frag_off & htons(0xFFF9))) { DEBUGP("Invalid fragment offset\n"); @@ -864,8 +868,7 @@ int nf_ct_frag6_init(void) nf_ct_frag6_hash_rnd = (u32) ((num_physpages ^ (num_physpages>>7)) ^ (jiffies ^ (jiffies >> 6))); - init_timer(&nf_ct_frag6_secret_timer); - nf_ct_frag6_secret_timer.function = nf_ct_frag6_secret_rebuild; + setup_timer(&nf_ct_frag6_secret_timer, nf_ct_frag6_secret_rebuild, 0); nf_ct_frag6_secret_timer.expires = jiffies + nf_ct_frag6_secret_interval; add_timer(&nf_ct_frag6_secret_timer); diff -puN net/ipv6/proc.c~git-net net/ipv6/proc.c --- a/net/ipv6/proc.c~git-net +++ a/net/ipv6/proc.c @@ -23,6 +23,7 @@ #include #include #include +#include #include #include #include @@ -141,6 +142,7 @@ static struct snmp_mib snmp6_udplite6_li SNMP_MIB_ITEM("UdpLite6OutDatagrams", UDP_MIB_OUTDATAGRAMS), SNMP_MIB_SENTINEL }; +#endif /* CONFIG_PROC_FS */ static unsigned long fold_field(void *mib[], int offt) @@ -155,6 +157,7 @@ fold_field(void *mib[], int offt) return res; } +#ifdef CONFIG_PROC_FS static inline void snmp6_seq_show_item(struct seq_file *seq, void **mib, struct snmp_mib *itemlist) { @@ -206,7 +209,37 @@ static const struct file_operations snmp .llseek = seq_lseek, .release = single_release, }; +#endif /* CONFIG_PROC_FS */ +static inline void +__snmp6_fill_stats(u64 *stats, void **mib, int items, int bytes) +{ + int i; + int pad = bytes - sizeof(u64) * items; + BUG_ON(pad < 0); + + /* Use put_unaligned() because stats may not be aligned for u64. 
*/ + put_unaligned(items, &stats[0]); + for (i = 1; i < items; i++) + put_unaligned(fold_field(mib, i), &stats[i]); + + memset(&stats[items], 0, pad); +} + +void +snmp6_fill_stats(u64 *stats, struct inet6_dev *idev, int attrtype, int bytes) +{ + switch(attrtype) { + case IFLA_INET6_STATS: + __snmp6_fill_stats(stats, (void **)idev->stats.ipv6, IPSTATS_MIB_MAX, bytes); + break; + case IFLA_INET6_ICMP6STATS: + __snmp6_fill_stats(stats, (void **)idev->stats.icmpv6, ICMP6_MIB_MAX, bytes); + break; + } +} + +#ifdef CONFIG_PROC_FS int snmp6_register_dev(struct inet6_dev *idev) { struct proc_dir_entry *p; @@ -283,6 +316,7 @@ int snmp6_unregister_dev(struct inet6_de { return 0; } + #endif /* CONFIG_PROC_FS */ int snmp6_alloc_dev(struct inet6_dev *idev) @@ -314,4 +348,34 @@ int snmp6_free_dev(struct inet6_dev *ide return 0; } +int snmp6_mib_init(void *ptr[2], size_t mibsize, size_t mibalign) +{ + if (ptr == NULL) + return -EINVAL; + + ptr[0] = __alloc_percpu(mibsize); + if (!ptr[0]) + goto err0; + + ptr[1] = __alloc_percpu(mibsize); + if (!ptr[1]) + goto err1; + + return 0; + +err1: + free_percpu(ptr[0]); + ptr[0] = NULL; +err0: + return -ENOMEM; +} + +void snmp6_mib_free(void *ptr[2]) +{ + if (ptr == NULL) + return; + free_percpu(ptr[0]); + free_percpu(ptr[1]); + ptr[0] = ptr[1] = NULL; +} diff -puN net/ipv6/protocol.c~git-net net/ipv6/protocol.c --- a/net/ipv6/protocol.c~git-net +++ a/net/ipv6/protocol.c @@ -60,6 +60,8 @@ int inet6_add_protocol(struct inet6_prot return ret; } +EXPORT_SYMBOL(inet6_add_protocol); + /* * Remove a protocol from the hash tables. */ @@ -83,3 +85,5 @@ int inet6_del_protocol(struct inet6_prot return ret; } + +EXPORT_SYMBOL(inet6_del_protocol); diff -puN net/ipv6/raw.c~git-net net/ipv6/raw.c --- a/net/ipv6/raw.c~git-net +++ a/net/ipv6/raw.c @@ -152,7 +152,7 @@ int ipv6_raw_deliver(struct sk_buff *skb int delivered = 0; __u8 hash; - saddr = &skb->nh.ipv6h->saddr; + saddr = &ipv6_hdr(skb)->saddr; daddr = saddr + 1; hash = nexthdr & (MAX_INET_PROTOS - 1); @@ -361,17 +361,18 @@ int rawv6_rcv(struct sock *sk, struct sk skb->ip_summed = CHECKSUM_UNNECESSARY; if (skb->ip_summed == CHECKSUM_COMPLETE) { - skb_postpull_rcsum(skb, skb->nh.raw, - skb->h.raw - skb->nh.raw); - if (!csum_ipv6_magic(&skb->nh.ipv6h->saddr, - &skb->nh.ipv6h->daddr, + skb_postpull_rcsum(skb, skb_network_header(skb), + skb_network_header_len(skb)); + if (!csum_ipv6_magic(&ipv6_hdr(skb)->saddr, + &ipv6_hdr(skb)->daddr, skb->len, inet->num, skb->csum)) skb->ip_summed = CHECKSUM_UNNECESSARY; } - if (skb->ip_summed != CHECKSUM_UNNECESSARY) - skb->csum = ~csum_unfold(csum_ipv6_magic(&skb->nh.ipv6h->saddr, - &skb->nh.ipv6h->daddr, - skb->len, inet->num, 0)); + if (!skb_csum_unnecessary(skb)) + skb->csum = ~csum_unfold(csum_ipv6_magic(&ipv6_hdr(skb)->saddr, + &ipv6_hdr(skb)->daddr, + skb->len, + inet->num, 0)); if (inet->hdrincl) { if (skb_checksum_complete(skb)) { @@ -420,7 +421,7 @@ static int rawv6_recvmsg(struct kiocb *i msg->msg_flags |= MSG_TRUNC; } - if (skb->ip_summed==CHECKSUM_UNNECESSARY) { + if (skb_csum_unnecessary(skb)) { err = skb_copy_datagram_iovec(skb, 0, msg->msg_iov, copied); } else if (msg->msg_flags&MSG_TRUNC) { if (__skb_checksum_complete(skb)) @@ -438,7 +439,7 @@ static int rawv6_recvmsg(struct kiocb *i if (sin6) { sin6->sin6_family = AF_INET6; sin6->sin6_port = 0; - ipv6_addr_copy(&sin6->sin6_addr, &skb->nh.ipv6h->saddr); + ipv6_addr_copy(&sin6->sin6_addr, &ipv6_hdr(skb)->saddr); sin6->sin6_flowinfo = 0; sin6->sin6_scope_id = 0; if (ipv6_addr_type(&sin6->sin6_addr) & IPV6_ADDR_LINKLOCAL) @@ 
-488,7 +489,8 @@ static int rawv6_push_pending_frames(str goto out; offset = rp->offset; - total_len = inet_sk(sk)->cork.length - (skb->nh.raw - skb->data); + total_len = inet_sk(sk)->cork.length - (skb_network_header(skb) - + skb->data); if (offset >= total_len - 1) { err = -EINVAL; ip6_flush_pending_frames(sk); @@ -511,7 +513,7 @@ static int rawv6_push_pending_frames(str if (csum_skb) continue; - len = skb->len - (skb->h.raw - skb->data); + len = skb->len - skb_transport_offset(skb); if (offset >= len) { offset -= len; continue; @@ -523,7 +525,7 @@ static int rawv6_push_pending_frames(str skb = csum_skb; } - offset += skb->h.raw - skb->data; + offset += skb_transport_offset(skb); if (skb_copy_bits(skb, offset, &csum, 2)) BUG(); @@ -575,11 +577,13 @@ static int rawv6_send_hdrinc(struct sock skb->priority = sk->sk_priority; skb->dst = dst_clone(&rt->u.dst); - skb->nh.ipv6h = iph = (struct ipv6hdr *)skb_put(skb, length); + skb_put(skb, length); + skb_reset_network_header(skb); + iph = ipv6_hdr(skb); skb->ip_summed = CHECKSUM_NONE; - skb->h.raw = skb->nh.raw; + skb->transport_header = skb->network_header; err = memcpy_fromiovecend((void *)iph, from, 0, length); if (err) goto error_fault; @@ -878,7 +882,7 @@ static int rawv6_seticmpfilter(struct so return 0; default: return -ENOPROTOOPT; - }; + } return 0; } @@ -903,7 +907,7 @@ static int rawv6_geticmpfilter(struct so return 0; default: return -ENOPROTOOPT; - }; + } return 0; } @@ -957,7 +961,8 @@ static int rawv6_setsockopt(struct sock default: return ipv6_setsockopt(sk, level, optname, optval, optlen); - }; + } + return do_rawv6_setsockopt(sk, level, optname, optval, optlen); } @@ -978,7 +983,7 @@ static int compat_rawv6_setsockopt(struc default: return compat_ipv6_setsockopt(sk, level, optname, optval, optlen); - }; + } return do_rawv6_setsockopt(sk, level, optname, optval, optlen); } #endif @@ -1031,7 +1036,8 @@ static int rawv6_getsockopt(struct sock default: return ipv6_getsockopt(sk, level, optname, optval, optlen); - }; + } + return do_rawv6_getsockopt(sk, level, optname, optval, optlen); } @@ -1052,7 +1058,7 @@ static int compat_rawv6_getsockopt(struc default: return compat_ipv6_getsockopt(sk, level, optname, optval, optlen); - }; + } return do_rawv6_getsockopt(sk, level, optname, optval, optlen); } #endif @@ -1073,7 +1079,7 @@ static int rawv6_ioctl(struct sock *sk, spin_lock_bh(&sk->sk_receive_queue.lock); skb = skb_peek(&sk->sk_receive_queue); if (skb != NULL) - amount = skb->tail - skb->h.raw; + amount = skb->tail - skb->transport_header; spin_unlock_bh(&sk->sk_receive_queue.lock); return put_user(amount, (int __user *)arg); } diff -puN net/ipv6/reassembly.c~git-net net/ipv6/reassembly.c --- a/net/ipv6/reassembly.c~git-net +++ a/net/ipv6/reassembly.c @@ -88,7 +88,7 @@ struct frag_queue int len; int meat; int iif; - struct timeval stamp; + ktime_t stamp; unsigned int csum; __u8 last_in; /* has first/last segment arrived? 
*/ #define COMPLETE 4 @@ -430,19 +430,24 @@ static void ip6_frag_queue(struct frag_q goto err; offset = ntohs(fhdr->frag_off) & ~0x7; - end = offset + (ntohs(skb->nh.ipv6h->payload_len) - - ((u8 *) (fhdr + 1) - (u8 *) (skb->nh.ipv6h + 1))); + end = offset + (ntohs(ipv6_hdr(skb)->payload_len) - + ((u8 *)(fhdr + 1) - (u8 *)(ipv6_hdr(skb) + 1))); if ((unsigned int)end > IPV6_MAXPLEN) { IP6_INC_STATS_BH(ip6_dst_idev(skb->dst), IPSTATS_MIB_INHDRERRORS); - icmpv6_param_prob(skb,ICMPV6_HDR_FIELD, (u8*)&fhdr->frag_off - skb->nh.raw); + icmpv6_param_prob(skb, ICMPV6_HDR_FIELD, + ((u8 *)&fhdr->frag_off - + skb_network_header(skb))); return; } - if (skb->ip_summed == CHECKSUM_COMPLETE) + if (skb->ip_summed == CHECKSUM_COMPLETE) { + const unsigned char *nh = skb_network_header(skb); skb->csum = csum_sub(skb->csum, - csum_partial(skb->nh.raw, (u8*)(fhdr+1)-skb->nh.raw, 0)); + csum_partial(nh, (u8 *)(fhdr + 1) - nh, + 0)); + } /* Is this the final fragment? */ if (!(fhdr->frag_off & htons(IP6_MF))) { @@ -562,7 +567,7 @@ static void ip6_frag_queue(struct frag_q if (skb->dev) fq->iif = skb->dev->ifindex; skb->dev = NULL; - skb_get_timestamp(skb, &fq->stamp); + fq->stamp = skb->tstamp; fq->meat += skb->len; atomic_add(skb->truesize, &ip6_frag_mem); @@ -605,7 +610,9 @@ static int ip6_frag_reasm(struct frag_qu BUG_TRAP(FRAG6_CB(head)->offset == 0); /* Unfragmented part is taken from the first segment. */ - payload_len = (head->data - head->nh.raw) - sizeof(struct ipv6hdr) + fq->len - sizeof(struct frag_hdr); + payload_len = ((head->data - skb_network_header(head)) - + sizeof(struct ipv6hdr) + fq->len - + sizeof(struct frag_hdr)); if (payload_len > IPV6_MAXPLEN) goto out_oversize; @@ -639,15 +646,15 @@ static int ip6_frag_reasm(struct frag_qu /* We have to remove fragment header from datagram and to relocate * header in order to calculate ICV correctly. */ nhoff = fq->nhoffset; - head->nh.raw[nhoff] = head->h.raw[0]; + skb_network_header(head)[nhoff] = skb_transport_header(head)[0]; memmove(head->head + sizeof(struct frag_hdr), head->head, (head->data - head->head) - sizeof(struct frag_hdr)); - head->mac.raw += sizeof(struct frag_hdr); - head->nh.raw += sizeof(struct frag_hdr); + head->mac_header += sizeof(struct frag_hdr); + head->network_header += sizeof(struct frag_hdr); skb_shinfo(head)->frag_list = head->next; - head->h.raw = head->data; - skb_push(head, head->data - head->nh.raw); + skb_reset_transport_header(head); + skb_push(head, head->data - skb_network_header(head)); atomic_sub(head->truesize, &ip6_frag_mem); for (fp=head->next; fp; fp = fp->next) { @@ -663,15 +670,17 @@ static int ip6_frag_reasm(struct frag_qu head->next = NULL; head->dev = dev; - skb_set_timestamp(head, &fq->stamp); - head->nh.ipv6h->payload_len = htons(payload_len); + head->tstamp = fq->stamp; + ipv6_hdr(head)->payload_len = htons(payload_len); IP6CB(head)->nhoff = nhoff; *skb_in = head; /* Yes, and fold redundant checksum back. 
8) */ if (head->ip_summed == CHECKSUM_COMPLETE) - head->csum = csum_partial(head->nh.raw, head->h.raw-head->nh.raw, head->csum); + head->csum = csum_partial(skb_network_header(head), + skb_network_header_len(head), + head->csum); rcu_read_lock(); IP6_INC_STATS_BH(__in6_dev_get(dev), IPSTATS_MIB_REASMOKS); @@ -699,33 +708,34 @@ static int ipv6_frag_rcv(struct sk_buff struct net_device *dev = skb->dev; struct frag_hdr *fhdr; struct frag_queue *fq; - struct ipv6hdr *hdr; - - hdr = skb->nh.ipv6h; + struct ipv6hdr *hdr = ipv6_hdr(skb); IP6_INC_STATS_BH(ip6_dst_idev(skb->dst), IPSTATS_MIB_REASMREQDS); /* Jumbo payload inhibits frag. header */ if (hdr->payload_len==0) { IP6_INC_STATS(ip6_dst_idev(skb->dst), IPSTATS_MIB_INHDRERRORS); - icmpv6_param_prob(skb, ICMPV6_HDR_FIELD, skb->h.raw-skb->nh.raw); + icmpv6_param_prob(skb, ICMPV6_HDR_FIELD, + skb_network_header_len(skb)); return -1; } - if (!pskb_may_pull(skb, (skb->h.raw-skb->data)+sizeof(struct frag_hdr))) { + if (!pskb_may_pull(skb, (skb_transport_offset(skb) + + sizeof(struct frag_hdr)))) { IP6_INC_STATS(ip6_dst_idev(skb->dst), IPSTATS_MIB_INHDRERRORS); - icmpv6_param_prob(skb, ICMPV6_HDR_FIELD, skb->h.raw-skb->nh.raw); + icmpv6_param_prob(skb, ICMPV6_HDR_FIELD, + skb_network_header_len(skb)); return -1; } - hdr = skb->nh.ipv6h; - fhdr = (struct frag_hdr *)skb->h.raw; + hdr = ipv6_hdr(skb); + fhdr = (struct frag_hdr *)skb_transport_header(skb); if (!(fhdr->frag_off & htons(0xFFF9))) { /* It is not a fragmented frame */ - skb->h.raw += sizeof(struct frag_hdr); + skb->transport_header += sizeof(struct frag_hdr); IP6_INC_STATS_BH(ip6_dst_idev(skb->dst), IPSTATS_MIB_REASMOKS); - IP6CB(skb)->nhoff = (u8*)fhdr - skb->nh.raw; + IP6CB(skb)->nhoff = (u8 *)fhdr - skb_network_header(skb); return 1; } diff -puN net/ipv6/route.c~git-net net/ipv6/route.c --- a/net/ipv6/route.c~git-net +++ a/net/ipv6/route.c @@ -575,6 +575,8 @@ struct rt6_info *rt6_lookup(struct in6_a return NULL; } +EXPORT_SYMBOL(rt6_lookup); + /* ip6_ins_rt is called with FREE table->tb6_lock. It takes new route entry, the addition fails by any reason the route is freed. 
In any case, if caller does not hold it, it may @@ -724,7 +726,7 @@ out2: void ip6_route_input(struct sk_buff *skb) { - struct ipv6hdr *iph = skb->nh.ipv6h; + struct ipv6hdr *iph = ipv6_hdr(skb); int flags = RT6_LOOKUP_F_HAS_SADDR; struct flowi fl = { .iif = skb->dev->ifindex, @@ -829,6 +831,7 @@ struct dst_entry * ip6_route_output(stru return fib6_rule_lookup(fl, flags, ip6_pol_route_output); } +EXPORT_SYMBOL(ip6_route_output); /* * Destination cache support functions @@ -1757,7 +1760,7 @@ int ipv6_route_ioctl(unsigned int cmd, v rtnl_unlock(); return err; - }; + } return -EINVAL; } @@ -1772,7 +1775,7 @@ static inline int ip6_pkt_drop(struct sk int type; switch (ipstats_mib_noroutes) { case IPSTATS_MIB_INNOROUTES: - type = ipv6_addr_type(&skb->nh.ipv6h->daddr); + type = ipv6_addr_type(&ipv6_hdr(skb)->daddr); if (type == IPV6_ADDR_ANY || type == IPV6_ADDR_RESERVED) { IP6_INC_STATS(ip6_dst_idev(skb->dst), IPSTATS_MIB_INADDRERRORS); break; @@ -2012,7 +2015,7 @@ errout: return err; } -int inet6_rtm_delroute(struct sk_buff *skb, struct nlmsghdr* nlh, void *arg) +static int inet6_rtm_delroute(struct sk_buff *skb, struct nlmsghdr* nlh, void *arg) { struct fib6_config cfg; int err; @@ -2024,7 +2027,7 @@ int inet6_rtm_delroute(struct sk_buff *s return ip6_route_del(&cfg); } -int inet6_rtm_newroute(struct sk_buff *skb, struct nlmsghdr* nlh, void *arg) +static int inet6_rtm_newroute(struct sk_buff *skb, struct nlmsghdr* nlh, void *arg) { struct fib6_config cfg; int err; @@ -2161,7 +2164,7 @@ int rt6_dump_route(struct rt6_info *rt, prefix, NLM_F_MULTI); } -int inet6_rtm_getroute(struct sk_buff *in_skb, struct nlmsghdr* nlh, void *arg) +static int inet6_rtm_getroute(struct sk_buff *in_skb, struct nlmsghdr* nlh, void *arg) { struct nlattr *tb[RTA_MAX+1]; struct rt6_info *rt; @@ -2215,7 +2218,7 @@ int inet6_rtm_getroute(struct sk_buff *i /* Reserve room for dummy headers, this skb can pass through good chunk of routing engine. */ - skb->mac.raw = skb->data; + skb_reset_mac_header(skb); skb_reserve(skb, MAX_HEADER + sizeof(struct ipv6hdr)); rt = (struct rt6_info*) ip6_route_output(NULL, &fl); @@ -2486,8 +2489,9 @@ ctl_table ipv6_route_table[] = { void __init ip6_route_init(void) { +#ifdef CONFIG_PROC_FS struct proc_dir_entry *p; - +#endif ip6_dst_ops.kmem_cachep = kmem_cache_create("ip6_dst_cache", sizeof(struct rt6_info), 0, SLAB_HWCACHE_ALIGN|SLAB_PANIC, NULL, NULL); @@ -2505,6 +2509,10 @@ void __init ip6_route_init(void) #ifdef CONFIG_IPV6_MULTIPLE_TABLES fib6_rules_init(); #endif + + __rtnl_register(PF_INET6, RTM_NEWROUTE, inet6_rtm_newroute, NULL); + __rtnl_register(PF_INET6, RTM_DELROUTE, inet6_rtm_delroute, NULL); + __rtnl_register(PF_INET6, RTM_GETROUTE, inet6_rtm_getroute, NULL); } void ip6_route_cleanup(void) diff -puN net/ipv6/sit.c~git-net net/ipv6/sit.c --- a/net/ipv6/sit.c~git-net +++ a/net/ipv6/sit.c @@ -224,8 +224,8 @@ static int ipip6_err(struct sk_buff *skb ICMP in the real Internet is absolutely infeasible. 
*/ struct iphdr *iph = (struct iphdr*)skb->data; - int type = skb->h.icmph->type; - int code = skb->h.icmph->code; + const int type = icmp_hdr(skb)->type; + const int code = icmp_hdr(skb)->code; struct ip_tunnel *t; int err; @@ -280,8 +280,8 @@ out: struct iphdr *iph = (struct iphdr*)dp; int hlen = iph->ihl<<2; struct ipv6hdr *iph6; - int type = skb->h.icmph->type; - int code = skb->h.icmph->code; + const int type = icmp_hdr(skb)->type; + const int code = icmp_hdr(skb)->code; int rel_type = 0; int rel_code = 0; int rel_info = 0; @@ -296,14 +296,14 @@ out: default: return; case ICMP_PARAMETERPROB: - if (skb->h.icmph->un.gateway < hlen) + if (icmp_hdr(skb)->un.gateway < hlen) return; /* So... This guy found something strange INSIDE encapsulated packet. Well, he is fool, but what can we do ? */ rel_type = ICMPV6_PARAMPROB; - rel_info = skb->h.icmph->un.gateway - hlen; + rel_info = icmp_hdr(skb)->un.gateway - hlen; break; case ICMP_DEST_UNREACH: @@ -340,7 +340,7 @@ out: dst_release(skb2->dst); skb2->dst = NULL; skb_pull(skb2, skb->data - (u8*)iph6); - skb2->nh.raw = skb2->data; + skb_reset_network_header(skb2); /* Try to guess incoming interface */ rt6i = rt6_lookup(&iph6->saddr, NULL, NULL, 0); @@ -366,7 +366,7 @@ out: static inline void ipip6_ecn_decapsulate(struct iphdr *iph, struct sk_buff *skb) { if (INET_ECN_is_ce(iph->tos)) - IP6_ECN_set_ce(skb->nh.ipv6h); + IP6_ECN_set_ce(ipv6_hdr(skb)); } static int ipip6_rcv(struct sk_buff *skb) @@ -377,13 +377,13 @@ static int ipip6_rcv(struct sk_buff *skb if (!pskb_may_pull(skb, sizeof(struct ipv6hdr))) goto out; - iph = skb->nh.iph; + iph = ip_hdr(skb); read_lock(&ipip6_lock); if ((tunnel = ipip6_tunnel_lookup(iph->saddr, iph->daddr)) != NULL) { secpath_reset(skb); - skb->mac.raw = skb->nh.raw; - skb->nh.raw = skb->data; + skb->mac_header = skb->network_header; + skb_reset_network_header(skb); IPCB(skb)->flags = 0; skb->protocol = htons(ETH_P_IPV6); skb->pkt_type = PACKET_HOST; @@ -430,7 +430,7 @@ static int ipip6_tunnel_xmit(struct sk_b struct ip_tunnel *tunnel = netdev_priv(dev); struct net_device_stats *stats = &tunnel->stat; struct iphdr *tiph = &tunnel->parms.iph; - struct ipv6hdr *iph6 = skb->nh.ipv6h; + struct ipv6hdr *iph6 = ipv6_hdr(skb); u8 tos = tunnel->parms.iph.tos; struct rtable *rt; /* Route to the other host */ struct net_device *tdev; /* Device to other host */ @@ -468,7 +468,7 @@ static int ipip6_tunnel_xmit(struct sk_b addr_type = ipv6_addr_type(addr6); if (addr_type == IPV6_ADDR_ANY) { - addr6 = &skb->nh.ipv6h->daddr; + addr6 = &ipv6_hdr(skb)->daddr; addr_type = ipv6_addr_type(addr6); } @@ -550,11 +550,12 @@ static int ipip6_tunnel_xmit(struct sk_b skb_set_owner_w(new_skb, skb->sk); dev_kfree_skb(skb); skb = new_skb; - iph6 = skb->nh.ipv6h; + iph6 = ipv6_hdr(skb); } - skb->h.raw = skb->nh.raw; - skb->nh.raw = skb_push(skb, sizeof(struct iphdr)); + skb->transport_header = skb->network_header; + skb_push(skb, sizeof(struct iphdr)); + skb_reset_network_header(skb); memset(&(IPCB(skb)->opt), 0, sizeof(IPCB(skb)->opt)); IPCB(skb)->flags = 0; dst_release(skb->dst); @@ -564,7 +565,7 @@ static int ipip6_tunnel_xmit(struct sk_b * Push down and install the IPIP header. 
*/ - iph = skb->nh.iph; + iph = ip_hdr(skb); iph->version = 4; iph->ihl = sizeof(struct iphdr)>>2; if (mtu > IPV6_MIN_MTU) diff -puN net/ipv6/tcp_ipv6.c~git-net net/ipv6/tcp_ipv6.c --- a/net/ipv6/tcp_ipv6.c~git-net +++ a/net/ipv6/tcp_ipv6.c @@ -115,10 +115,10 @@ static __inline__ __sum16 tcp_v6_check(s static __u32 tcp_v6_init_sequence(struct sk_buff *skb) { - return secure_tcpv6_sequence_number(skb->nh.ipv6h->daddr.s6_addr32, - skb->nh.ipv6h->saddr.s6_addr32, - skb->h.th->dest, - skb->h.th->source); + return secure_tcpv6_sequence_number(ipv6_hdr(skb)->daddr.s6_addr32, + ipv6_hdr(skb)->saddr.s6_addr32, + tcp_hdr(skb)->dest, + tcp_hdr(skb)->source); } static int tcp_v6_connect(struct sock *sk, struct sockaddr *uaddr, @@ -486,7 +486,9 @@ static int tcp_v6_send_synack(struct soc struct sk_buff *pktopts = treq->pktopts; struct inet6_skb_parm *rxopt = IP6CB(pktopts); if (rxopt->srcrt) - opt = ipv6_invert_rthdr(sk, (struct ipv6_rt_hdr*)(pktopts->nh.raw + rxopt->srcrt)); + opt = ipv6_invert_rthdr(sk, + (struct ipv6_rt_hdr *)(skb_network_header(pktopts) + + rxopt->srcrt)); } if (opt && opt->srcrt) { @@ -507,7 +509,7 @@ static int tcp_v6_send_synack(struct soc skb = tcp_make_synack(sk, dst, req); if (skb) { - struct tcphdr *th = skb->h.th; + struct tcphdr *th = tcp_hdr(skb); th->check = tcp_v6_check(th, skb->len, &treq->loc_addr, &treq->rmt_addr, @@ -835,8 +837,8 @@ static int tcp_v6_inbound_md5_hash (stru { __u8 *hash_location = NULL; struct tcp_md5sig_key *hash_expected; - struct ipv6hdr *ip6h = skb->nh.ipv6h; - struct tcphdr *th = skb->h.th; + struct ipv6hdr *ip6h = ipv6_hdr(skb); + struct tcphdr *th = tcp_hdr(skb); int length = (th->doff << 2) - sizeof (*th); int genhash; u8 *ptr; @@ -944,10 +946,11 @@ static struct timewait_sock_ops tcp6_tim static void tcp_v6_send_check(struct sock *sk, int len, struct sk_buff *skb) { struct ipv6_pinfo *np = inet6_sk(sk); - struct tcphdr *th = skb->h.th; + struct tcphdr *th = tcp_hdr(skb); if (skb->ip_summed == CHECKSUM_PARTIAL) { th->check = ~csum_ipv6_magic(&np->saddr, &np->daddr, len, IPPROTO_TCP, 0); + skb->csum_start = skb_transport_header(skb) - skb->head; skb->csum_offset = offsetof(struct tcphdr, check); } else { th->check = csum_ipv6_magic(&np->saddr, &np->daddr, len, IPPROTO_TCP, @@ -964,12 +967,13 @@ static int tcp_v6_gso_send_check(struct if (!pskb_may_pull(skb, sizeof(*th))) return -EINVAL; - ipv6h = skb->nh.ipv6h; - th = skb->h.th; + ipv6h = ipv6_hdr(skb); + th = tcp_hdr(skb); th->check = 0; th->check = ~csum_ipv6_magic(&ipv6h->saddr, &ipv6h->daddr, skb->len, IPPROTO_TCP, 0); + skb->csum_start = skb_transport_header(skb) - skb->head; skb->csum_offset = offsetof(struct tcphdr, check); skb->ip_summed = CHECKSUM_PARTIAL; return 0; @@ -977,7 +981,7 @@ static int tcp_v6_gso_send_check(struct static void tcp_v6_send_reset(struct sock *sk, struct sk_buff *skb) { - struct tcphdr *th = skb->h.th, *t1; + struct tcphdr *th = tcp_hdr(skb), *t1; struct sk_buff *buff; struct flowi fl; int tot_len = sizeof(*th); @@ -993,7 +997,7 @@ static void tcp_v6_send_reset(struct soc #ifdef CONFIG_TCP_MD5SIG if (sk) - key = tcp_v6_md5_do_lookup(sk, &skb->nh.ipv6h->daddr); + key = tcp_v6_md5_do_lookup(sk, &ipv6_hdr(skb)->daddr); else key = NULL; @@ -1037,20 +1041,18 @@ static void tcp_v6_send_reset(struct soc (TCPOPT_NOP << 16) | (TCPOPT_MD5SIG << 8) | TCPOLEN_MD5SIG); - tcp_v6_do_calc_md5_hash((__u8*)&opt[1], - key, - &skb->nh.ipv6h->daddr, - &skb->nh.ipv6h->saddr, - t1, IPPROTO_TCP, - tot_len); + tcp_v6_do_calc_md5_hash((__u8 *)&opt[1], key, + &ipv6_hdr(skb)->daddr, + 
&ipv6_hdr(skb)->saddr, + t1, IPPROTO_TCP, tot_len); } #endif buff->csum = csum_partial((char *)t1, sizeof(*t1), 0); memset(&fl, 0, sizeof(fl)); - ipv6_addr_copy(&fl.fl6_dst, &skb->nh.ipv6h->saddr); - ipv6_addr_copy(&fl.fl6_src, &skb->nh.ipv6h->daddr); + ipv6_addr_copy(&fl.fl6_dst, &ipv6_hdr(skb)->saddr); + ipv6_addr_copy(&fl.fl6_src, &ipv6_hdr(skb)->daddr); t1->check = csum_ipv6_magic(&fl.fl6_src, &fl.fl6_dst, sizeof(*t1), IPPROTO_TCP, @@ -1079,7 +1081,7 @@ static void tcp_v6_send_reset(struct soc static void tcp_v6_send_ack(struct tcp_timewait_sock *tw, struct sk_buff *skb, u32 seq, u32 ack, u32 win, u32 ts) { - struct tcphdr *th = skb->h.th, *t1; + struct tcphdr *th = tcp_hdr(skb), *t1; struct sk_buff *buff; struct flowi fl; int tot_len = sizeof(struct tcphdr); @@ -1091,7 +1093,7 @@ static void tcp_v6_send_ack(struct tcp_t #ifdef CONFIG_TCP_MD5SIG if (!tw && skb->sk) { - key = tcp_v6_md5_do_lookup(skb->sk, &skb->nh.ipv6h->daddr); + key = tcp_v6_md5_do_lookup(skb->sk, &ipv6_hdr(skb)->daddr); } else if (tw && tw->tw_md5_keylen) { tw_key.key = tw->tw_md5_key; tw_key.keylen = tw->tw_md5_keylen; @@ -1140,20 +1142,18 @@ static void tcp_v6_send_ack(struct tcp_t if (key) { *topt++ = htonl((TCPOPT_NOP << 24) | (TCPOPT_NOP << 16) | (TCPOPT_MD5SIG << 8) | TCPOLEN_MD5SIG); - tcp_v6_do_calc_md5_hash((__u8 *)topt, - key, - &skb->nh.ipv6h->daddr, - &skb->nh.ipv6h->saddr, - t1, IPPROTO_TCP, - tot_len); + tcp_v6_do_calc_md5_hash((__u8 *)topt, key, + &ipv6_hdr(skb)->daddr, + &ipv6_hdr(skb)->saddr, + t1, IPPROTO_TCP, tot_len); } #endif buff->csum = csum_partial((char *)t1, tot_len, 0); memset(&fl, 0, sizeof(fl)); - ipv6_addr_copy(&fl.fl6_dst, &skb->nh.ipv6h->saddr); - ipv6_addr_copy(&fl.fl6_src, &skb->nh.ipv6h->daddr); + ipv6_addr_copy(&fl.fl6_dst, &ipv6_hdr(skb)->saddr); + ipv6_addr_copy(&fl.fl6_src, &ipv6_hdr(skb)->daddr); t1->check = csum_ipv6_magic(&fl.fl6_src, &fl.fl6_dst, tot_len, IPPROTO_TCP, @@ -1197,18 +1197,18 @@ static void tcp_v6_reqsk_send_ack(struct static struct sock *tcp_v6_hnd_req(struct sock *sk,struct sk_buff *skb) { struct request_sock *req, **prev; - const struct tcphdr *th = skb->h.th; + const struct tcphdr *th = tcp_hdr(skb); struct sock *nsk; /* Find possible connection requests. 
*/ req = inet6_csk_search_req(sk, &prev, th->source, - &skb->nh.ipv6h->saddr, - &skb->nh.ipv6h->daddr, inet6_iif(skb)); + &ipv6_hdr(skb)->saddr, + &ipv6_hdr(skb)->daddr, inet6_iif(skb)); if (req) return tcp_check_req(sk, skb, req, prev); - nsk = __inet6_lookup_established(&tcp_hashinfo, &skb->nh.ipv6h->saddr, - th->source, &skb->nh.ipv6h->daddr, + nsk = __inet6_lookup_established(&tcp_hashinfo, &ipv6_hdr(skb)->saddr, + th->source, &ipv6_hdr(skb)->daddr, ntohs(th->dest), inet6_iif(skb)); if (nsk) { @@ -1275,9 +1275,9 @@ static int tcp_v6_conn_request(struct so tcp_openreq_init(req, &tmp_opt, skb); treq = inet6_rsk(req); - ipv6_addr_copy(&treq->rmt_addr, &skb->nh.ipv6h->saddr); - ipv6_addr_copy(&treq->loc_addr, &skb->nh.ipv6h->daddr); - TCP_ECN_create_request(req, skb->h.th); + ipv6_addr_copy(&treq->rmt_addr, &ipv6_hdr(skb)->saddr); + ipv6_addr_copy(&treq->loc_addr, &ipv6_hdr(skb)->daddr); + TCP_ECN_create_request(req, tcp_hdr(skb)); treq->pktopts = NULL; if (ipv6_opt_accepted(sk, skb) || np->rxopt.bits.rxinfo || np->rxopt.bits.rxoinfo || @@ -1363,7 +1363,7 @@ static struct sock * tcp_v6_syn_recv_soc newnp->pktoptions = NULL; newnp->opt = NULL; newnp->mcast_oif = inet6_iif(skb); - newnp->mcast_hops = skb->nh.ipv6h->hop_limit; + newnp->mcast_hops = ipv6_hdr(skb)->hop_limit; /* * No need to charge this sock to the relevant IPv6 refcnt debug socks count @@ -1389,7 +1389,9 @@ static struct sock * tcp_v6_syn_recv_soc opt == NULL && treq->pktopts) { struct inet6_skb_parm *rxopt = IP6CB(treq->pktopts); if (rxopt->srcrt) - opt = ipv6_invert_rthdr(sk, (struct ipv6_rt_hdr *)(treq->pktopts->nh.raw + rxopt->srcrt)); + opt = ipv6_invert_rthdr(sk, + (struct ipv6_rt_hdr *)(skb_network_header(treq->pktopts) + + rxopt->srcrt)); } if (dst == NULL) { @@ -1469,7 +1471,7 @@ static struct sock * tcp_v6_syn_recv_soc } newnp->opt = NULL; newnp->mcast_oif = inet6_iif(skb); - newnp->mcast_hops = skb->nh.ipv6h->hop_limit; + newnp->mcast_hops = ipv6_hdr(skb)->hop_limit; /* Clone native IPv6 options from listening socket (if any) @@ -1528,15 +1530,16 @@ out: static __sum16 tcp_v6_checksum_init(struct sk_buff *skb) { if (skb->ip_summed == CHECKSUM_COMPLETE) { - if (!tcp_v6_check(skb->h.th,skb->len,&skb->nh.ipv6h->saddr, - &skb->nh.ipv6h->daddr,skb->csum)) { + if (!tcp_v6_check(tcp_hdr(skb), skb->len, &ipv6_hdr(skb)->saddr, + &ipv6_hdr(skb)->daddr, skb->csum)) { skb->ip_summed = CHECKSUM_UNNECESSARY; return 0; } } - skb->csum = ~csum_unfold(tcp_v6_check(skb->h.th,skb->len,&skb->nh.ipv6h->saddr, - &skb->nh.ipv6h->daddr, 0)); + skb->csum = ~csum_unfold(tcp_v6_check(tcp_hdr(skb), skb->len, + &ipv6_hdr(skb)->saddr, + &ipv6_hdr(skb)->daddr, 0)); if (skb->len <= 76) { return __skb_checksum_complete(skb); @@ -1600,7 +1603,7 @@ static int tcp_v6_do_rcv(struct sock *sk if (sk->sk_state == TCP_ESTABLISHED) { /* Fast path */ TCP_CHECK_TIMER(sk); - if (tcp_rcv_established(sk, skb, skb->h.th, skb->len)) + if (tcp_rcv_established(sk, skb, tcp_hdr(skb), skb->len)) goto reset; TCP_CHECK_TIMER(sk); if (opt_skb) @@ -1608,7 +1611,7 @@ static int tcp_v6_do_rcv(struct sock *sk return 0; } - if (skb->len < (skb->h.th->doff<<2) || tcp_checksum_complete(skb)) + if (skb->len < tcp_hdrlen(skb) || tcp_checksum_complete(skb)) goto csum_err; if (sk->sk_state == TCP_LISTEN) { @@ -1631,7 +1634,7 @@ static int tcp_v6_do_rcv(struct sock *sk } TCP_CHECK_TIMER(sk); - if (tcp_rcv_state_process(sk, skb, skb->h.th, skb->len)) + if (tcp_rcv_state_process(sk, skb, tcp_hdr(skb), skb->len)) goto reset; TCP_CHECK_TIMER(sk); if (opt_skb) @@ -1664,7 +1667,7 @@ 
ipv6_pktoptions: if (np->rxopt.bits.rxinfo || np->rxopt.bits.rxoinfo) np->mcast_oif = inet6_iif(opt_skb); if (np->rxopt.bits.rxhlim || np->rxopt.bits.rxohlim) - np->mcast_hops = opt_skb->nh.ipv6h->hop_limit; + np->mcast_hops = ipv6_hdr(opt_skb)->hop_limit; if (ipv6_opt_accepted(sk, opt_skb)) { skb_set_owner_r(opt_skb, sk); opt_skb = xchg(&np->pktoptions, opt_skb); @@ -1697,28 +1700,27 @@ static int tcp_v6_rcv(struct sk_buff **p if (!pskb_may_pull(skb, sizeof(struct tcphdr))) goto discard_it; - th = skb->h.th; + th = tcp_hdr(skb); if (th->doff < sizeof(struct tcphdr)/4) goto bad_packet; if (!pskb_may_pull(skb, th->doff*4)) goto discard_it; - if ((skb->ip_summed != CHECKSUM_UNNECESSARY && - tcp_v6_checksum_init(skb))) + if (!skb_csum_unnecessary(skb) && tcp_v6_checksum_init(skb)) goto bad_packet; - th = skb->h.th; + th = tcp_hdr(skb); TCP_SKB_CB(skb)->seq = ntohl(th->seq); TCP_SKB_CB(skb)->end_seq = (TCP_SKB_CB(skb)->seq + th->syn + th->fin + skb->len - th->doff*4); TCP_SKB_CB(skb)->ack_seq = ntohl(th->ack_seq); TCP_SKB_CB(skb)->when = 0; - TCP_SKB_CB(skb)->flags = ipv6_get_dsfield(skb->nh.ipv6h); + TCP_SKB_CB(skb)->flags = ipv6_get_dsfield(ipv6_hdr(skb)); TCP_SKB_CB(skb)->sacked = 0; - sk = __inet6_lookup(&tcp_hashinfo, &skb->nh.ipv6h->saddr, th->source, - &skb->nh.ipv6h->daddr, ntohs(th->dest), + sk = __inet6_lookup(&tcp_hashinfo, &ipv6_hdr(skb)->saddr, th->source, + &ipv6_hdr(skb)->daddr, ntohs(th->dest), inet6_iif(skb)); if (!sk) @@ -1798,7 +1800,7 @@ do_time_wait: struct sock *sk2; sk2 = inet6_lookup_listener(&tcp_hashinfo, - &skb->nh.ipv6h->daddr, + &ipv6_hdr(skb)->daddr, ntohs(th->dest), inet6_iif(skb)); if (sk2 != NULL) { struct inet_timewait_sock *tw = inet_twsk(sk); @@ -1945,6 +1947,7 @@ static int tcp_v6_destroy_sock(struct so return inet6_destroy_sock(sk); } +#ifdef CONFIG_PROC_FS /* Proc filesystem TCPv6 sock list dumping. */ static void get_openreq6(struct seq_file *seq, struct sock *sk, struct request_sock *req, int i, int uid) @@ -2061,7 +2064,6 @@ static void get_timewait6_sock(struct se atomic_read(&tw->tw_refcnt), tw); } -#ifdef CONFIG_PROC_FS static int tcp6_seq_show(struct seq_file *seq, void *v) { struct tcp_iter_state *st; diff -puN net/ipv6/udp.c~git-net net/ipv6/udp.c --- a/net/ipv6/udp.c~git-net +++ a/net/ipv6/udp.c @@ -93,10 +93,10 @@ static struct sock *__udp6_lib_lookup(st continue; score++; } - if(score == 4) { + if (score == 4) { result = sk; break; - } else if(score > badness) { + } else if (score > badness) { result = sk; badness = score; } @@ -120,8 +120,9 @@ int udpv6_recvmsg(struct kiocb *iocb, st struct ipv6_pinfo *np = inet6_sk(sk); struct inet_sock *inet = inet_sk(sk); struct sk_buff *skb; - size_t copied; - int err, copy_only, is_udplite = IS_UDPLITE(sk); + unsigned int ulen, copied; + int err; + int is_udplite = IS_UDPLITE(sk); if (addr_len) *addr_len=sizeof(struct sockaddr_in6); @@ -134,24 +135,25 @@ try_again: if (!skb) goto out; - copied = skb->len - sizeof(struct udphdr); - if (copied > len) { - copied = len; + ulen = skb->len - sizeof(struct udphdr); + copied = len; + if (copied > ulen) + copied = ulen; + else if (copied < ulen) msg->msg_flags |= MSG_TRUNC; - } /* - * Decide whether to checksum and/or copy data. + * If checksum is needed at all, try to do it while copying the + * data. If the data is truncated, or if we only want a partial + * coverage checksum (UDP-Lite), do it before the copy. 
*/ - copy_only = (skb->ip_summed==CHECKSUM_UNNECESSARY); - if (is_udplite || (!copy_only && msg->msg_flags&MSG_TRUNC)) { - if (__udp_lib_checksum_complete(skb)) + if (copied < ulen || UDP_SKB_CB(skb)->partial_cov) { + if (udp_lib_checksum_complete(skb)) goto csum_copy_err; - copy_only = 1; } - if (copy_only) + if (skb_csum_unnecessary(skb)) err = skb_copy_datagram_iovec(skb, sizeof(struct udphdr), msg->msg_iov, copied ); else { @@ -170,15 +172,16 @@ try_again: sin6 = (struct sockaddr_in6 *) msg->msg_name; sin6->sin6_family = AF_INET6; - sin6->sin6_port = skb->h.uh->source; + sin6->sin6_port = udp_hdr(skb)->source; sin6->sin6_flowinfo = 0; sin6->sin6_scope_id = 0; if (skb->protocol == htons(ETH_P_IP)) ipv6_addr_set(&sin6->sin6_addr, 0, 0, - htonl(0xffff), skb->nh.iph->saddr); + htonl(0xffff), ip_hdr(skb)->saddr); else { - ipv6_addr_copy(&sin6->sin6_addr, &skb->nh.ipv6h->saddr); + ipv6_addr_copy(&sin6->sin6_addr, + &ipv6_hdr(skb)->saddr); if (ipv6_addr_type(&sin6->sin6_addr) & IPV6_ADDR_LINKLOCAL) sin6->sin6_scope_id = IP6CB(skb)->iif; } @@ -194,7 +197,7 @@ try_again: err = copied; if (flags & MSG_TRUNC) - err = skb->len - sizeof(struct udphdr); + err = ulen; out_free: skb_free_datagram(sk, skb); @@ -279,8 +282,10 @@ int udpv6_queue_rcv_skb(struct sock * sk } } - if (udp_lib_checksum_complete(skb)) - goto drop; + if (sk->sk_filter) { + if (udp_lib_checksum_complete(skb)) + goto drop; + } if ((rc = sock_queue_rcv_skb(sk,skb)) < 0) { /* Note that an ENOMEM error is charged twice */ @@ -325,7 +330,7 @@ static struct sock *udp_v6_mcast_next(st if (!ipv6_addr_equal(&np->rcv_saddr, loc_addr)) continue; } - if(!inet6_mc_check(s, loc_addr, rmt_addr)) + if (!inet6_mc_check(s, loc_addr, rmt_addr)) continue; return s; } @@ -341,7 +346,7 @@ static int __udp6_lib_mcast_deliver(stru struct in6_addr *daddr, struct hlist_head udptable[]) { struct sock *sk, *sk2; - const struct udphdr *uh = skb->h.uh; + const struct udphdr *uh = udp_hdr(skb); int dif; read_lock(&udp_hash_lock); @@ -366,9 +371,20 @@ out: return 0; } -static inline int udp6_csum_init(struct sk_buff *skb, struct udphdr *uh) - +static inline int udp6_csum_init(struct sk_buff *skb, struct udphdr *uh, + int proto) { + int err; + + UDP_SKB_CB(skb)->partial_cov = 0; + UDP_SKB_CB(skb)->cscov = skb->len; + + if (proto == IPPROTO_UDPLITE) { + err = udplite_checksum_init(skb, uh); + if (err) + return err; + } + if (uh->check == 0) { /* RFC 2460 section 8.1 says that we SHOULD log this error. Well, it is reasonable. 
@@ -377,21 +393,20 @@ static inline int udp6_csum_init(struct return 1; } if (skb->ip_summed == CHECKSUM_COMPLETE && - !csum_ipv6_magic(&skb->nh.ipv6h->saddr, &skb->nh.ipv6h->daddr, - skb->len, IPPROTO_UDP, skb->csum )) + !csum_ipv6_magic(&ipv6_hdr(skb)->saddr, &ipv6_hdr(skb)->daddr, + skb->len, proto, skb->csum)) skb->ip_summed = CHECKSUM_UNNECESSARY; - if (skb->ip_summed != CHECKSUM_UNNECESSARY) - skb->csum = ~csum_unfold(csum_ipv6_magic(&skb->nh.ipv6h->saddr, - &skb->nh.ipv6h->daddr, - skb->len, IPPROTO_UDP, - 0)); + if (!skb_csum_unnecessary(skb)) + skb->csum = ~csum_unfold(csum_ipv6_magic(&ipv6_hdr(skb)->saddr, + &ipv6_hdr(skb)->daddr, + skb->len, proto, 0)); - return (UDP_SKB_CB(skb)->partial_cov = 0); + return 0; } int __udp6_lib_rcv(struct sk_buff **pskb, struct hlist_head udptable[], - int is_udplite) + int proto) { struct sk_buff *skb = *pskb; struct sock *sk; @@ -403,15 +418,16 @@ int __udp6_lib_rcv(struct sk_buff **pskb if (!pskb_may_pull(skb, sizeof(struct udphdr))) goto short_packet; - saddr = &skb->nh.ipv6h->saddr; - daddr = &skb->nh.ipv6h->daddr; - uh = skb->h.uh; + saddr = &ipv6_hdr(skb)->saddr; + daddr = &ipv6_hdr(skb)->daddr; + uh = udp_hdr(skb); ulen = ntohs(uh->len); if (ulen > skb->len) goto short_packet; - if(! is_udplite ) { /* UDP validates ulen. */ + if (proto == IPPROTO_UDP) { + /* UDP validates ulen. */ /* Check for jumbo payload */ if (ulen == 0) @@ -423,19 +439,15 @@ int __udp6_lib_rcv(struct sk_buff **pskb if (ulen < skb->len) { if (pskb_trim_rcsum(skb, ulen)) goto short_packet; - saddr = &skb->nh.ipv6h->saddr; - daddr = &skb->nh.ipv6h->daddr; - uh = skb->h.uh; + saddr = &ipv6_hdr(skb)->saddr; + daddr = &ipv6_hdr(skb)->daddr; + uh = udp_hdr(skb); } - - if (udp6_csum_init(skb, uh)) - goto discard; - - } else { /* UDP-Lite validates cscov. */ - if (udplite6_csum_init(skb, uh)) - goto discard; } + if (udp6_csum_init(skb, uh, proto)) + goto discard; + /* * Multicast receive code */ @@ -457,33 +469,34 @@ int __udp6_lib_rcv(struct sk_buff **pskb if (udp_lib_checksum_complete(skb)) goto discard; - UDP6_INC_STATS_BH(UDP_MIB_NOPORTS, is_udplite); + UDP6_INC_STATS_BH(UDP_MIB_NOPORTS, proto == IPPROTO_UDPLITE); icmpv6_send(skb, ICMPV6_DEST_UNREACH, ICMPV6_PORT_UNREACH, 0, dev); kfree_skb(skb); - return(0); + return 0; } /* deliver */ udpv6_queue_rcv_skb(sk, skb); sock_put(sk); - return(0); + return 0; short_packet: LIMIT_NETDEBUG(KERN_DEBUG "UDP%sv6: short packet: %d/%u\n", - is_udplite? "-Lite" : "", ulen, skb->len); + proto == IPPROTO_UDPLITE ? 
"-Lite" : "", + ulen, skb->len); discard: - UDP6_INC_STATS_BH(UDP_MIB_INERRORS, is_udplite); + UDP6_INC_STATS_BH(UDP_MIB_INERRORS, proto == IPPROTO_UDPLITE); kfree_skb(skb); - return(0); + return 0; } static __inline__ int udpv6_rcv(struct sk_buff **pskb) { - return __udp6_lib_rcv(pskb, udp_hash, 0); + return __udp6_lib_rcv(pskb, udp_hash, IPPROTO_UDP); } /* @@ -521,7 +534,7 @@ static int udp_v6_push_pending_frames(st /* * Create a UDP header */ - uh = skb->h.uh; + uh = udp_hdr(skb); uh->source = fl->fl_ip_sport; uh->dest = fl->fl_ip_dport; uh->len = htons(up->len); diff -puN net/ipv6/udplite.c~git-net net/ipv6/udplite.c --- a/net/ipv6/udplite.c~git-net +++ a/net/ipv6/udplite.c @@ -19,7 +19,7 @@ DEFINE_SNMP_STAT(struct udp_mib, udplite static int udplitev6_rcv(struct sk_buff **pskb) { - return __udp6_lib_rcv(pskb, udplite_hash, 1); + return __udp6_lib_rcv(pskb, udplite_hash, IPPROTO_UDPLITE); } static void udplitev6_err(struct sk_buff *skb, diff -puN net/ipv6/xfrm6_input.c~git-net net/ipv6/xfrm6_input.c --- a/net/ipv6/xfrm6_input.c~git-net +++ a/net/ipv6/xfrm6_input.c @@ -28,14 +28,14 @@ int xfrm6_rcv_spi(struct sk_buff *skb, _ unsigned int nhoff; nhoff = IP6CB(skb)->nhoff; - nexthdr = skb->nh.raw[nhoff]; + nexthdr = skb_network_header(skb)[nhoff]; seq = 0; if (!spi && (err = xfrm_parse_spi(skb, nexthdr, &spi, &seq)) != 0) goto drop; do { - struct ipv6hdr *iph = skb->nh.ipv6h; + struct ipv6hdr *iph = ipv6_hdr(skb); if (xfrm_nr == XFRM_MAX_DEPTH) goto drop; @@ -58,7 +58,7 @@ int xfrm6_rcv_spi(struct sk_buff *skb, _ if (nexthdr <= 0) goto drop_unlock; - skb->nh.raw[nhoff] = nexthdr; + skb_network_header(skb)[nhoff] = nexthdr; if (x->props.replay_window) xfrm_replay_advance(x, seq); @@ -112,8 +112,8 @@ int xfrm6_rcv_spi(struct sk_buff *skb, _ return -1; } else { #ifdef CONFIG_NETFILTER - skb->nh.ipv6h->payload_len = htons(skb->len); - __skb_push(skb, skb->data - skb->nh.raw); + ipv6_hdr(skb)->payload_len = htons(skb->len); + __skb_push(skb, skb->data - skb_network_header(skb)); NF_HOOK(PF_INET6, NF_IP6_PRE_ROUTING, skb, skb->dev, NULL, ip6_rcv_finish); @@ -140,6 +140,8 @@ int xfrm6_rcv(struct sk_buff **pskb) return xfrm6_rcv_spi(*pskb, 0); } +EXPORT_SYMBOL(xfrm6_rcv); + int xfrm6_input_addr(struct sk_buff *skb, xfrm_address_t *daddr, xfrm_address_t *saddr, u8 proto) { @@ -247,3 +249,5 @@ drop: xfrm_state_put(xfrm_vec_one); return -1; } + +EXPORT_SYMBOL(xfrm6_input_addr); diff -puN net/ipv6/xfrm6_mode_beet.c~git-net net/ipv6/xfrm6_mode_beet.c --- a/net/ipv6/xfrm6_mode_beet.c~git-net +++ a/net/ipv6/xfrm6_mode_beet.c @@ -38,17 +38,18 @@ static int xfrm6_beet_output(struct xfrm int hdr_len; skb_push(skb, x->props.header_len); - iph = skb->nh.ipv6h; + iph = ipv6_hdr(skb); hdr_len = ip6_find_1stfragopt(skb, &prevhdr); - skb->nh.raw = prevhdr - x->props.header_len; - skb->h.raw = skb->data + hdr_len; + skb_set_network_header(skb, + (prevhdr - x->props.header_len) - skb->data); + skb_set_transport_header(skb, hdr_len); memmove(skb->data, iph, hdr_len); - skb->nh.raw = skb->data; - top_iph = skb->nh.ipv6h; - skb->nh.raw = &top_iph->nexthdr; - skb->h.ipv6h = top_iph + 1; + skb_reset_network_header(skb); + top_iph = ipv6_hdr(skb); + skb->transport_header = skb->network_header + sizeof(struct ipv6hdr); + skb->network_header += offsetof(struct ipv6hdr, nexthdr); ipv6_addr_copy(&top_iph->saddr, (struct in6_addr *)&x->props.saddr); ipv6_addr_copy(&top_iph->daddr, (struct in6_addr *)&x->id.daddr); @@ -59,6 +60,7 @@ static int xfrm6_beet_output(struct xfrm static int xfrm6_beet_input(struct xfrm_state *x, 
struct sk_buff *skb) { struct ipv6hdr *ip6h; + const unsigned char *old_mac; int size = sizeof(struct ipv6hdr); int err = -EINVAL; @@ -66,13 +68,14 @@ static int xfrm6_beet_input(struct xfrm_ goto out; skb_push(skb, size); - memmove(skb->data, skb->nh.raw, size); - skb->nh.raw = skb->data; + memmove(skb->data, skb_network_header(skb), size); + skb_reset_network_header(skb); - skb->mac.raw = memmove(skb->data - skb->mac_len, - skb->mac.raw, skb->mac_len); + old_mac = skb_mac_header(skb); + skb_set_mac_header(skb, -skb->mac_len); + memmove(skb_mac_header(skb), old_mac, skb->mac_len); - ip6h = skb->nh.ipv6h; + ip6h = ipv6_hdr(skb); ip6h->payload_len = htons(skb->len - size); ipv6_addr_copy(&ip6h->daddr, (struct in6_addr *) &x->sel.daddr.a6); ipv6_addr_copy(&ip6h->saddr, (struct in6_addr *) &x->sel.saddr.a6); diff -puN net/ipv6/xfrm6_mode_ro.c~git-net net/ipv6/xfrm6_mode_ro.c --- a/net/ipv6/xfrm6_mode_ro.c~git-net +++ a/net/ipv6/xfrm6_mode_ro.c @@ -50,11 +50,12 @@ static int xfrm6_ro_output(struct xfrm_s int hdr_len; skb_push(skb, x->props.header_len); - iph = skb->nh.ipv6h; + iph = ipv6_hdr(skb); hdr_len = x->type->hdr_offset(x, skb, &prevhdr); - skb->nh.raw = prevhdr - x->props.header_len; - skb->h.raw = skb->data + hdr_len; + skb_set_network_header(skb, + (prevhdr - x->props.header_len) - skb->data); + skb_set_transport_header(skb, hdr_len); memmove(skb->data, iph, hdr_len); return 0; } diff -puN net/ipv6/xfrm6_mode_transport.c~git-net net/ipv6/xfrm6_mode_transport.c --- a/net/ipv6/xfrm6_mode_transport.c~git-net +++ a/net/ipv6/xfrm6_mode_transport.c @@ -32,11 +32,12 @@ static int xfrm6_transport_output(struct int hdr_len; skb_push(skb, x->props.header_len); - iph = skb->nh.ipv6h; + iph = ipv6_hdr(skb); hdr_len = x->type->hdr_offset(x, skb, &prevhdr); - skb->nh.raw = prevhdr - x->props.header_len; - skb->h.raw = skb->data + hdr_len; + skb_set_network_header(skb, + (prevhdr - x->props.header_len) - skb->data); + skb_set_transport_header(skb, hdr_len); memmove(skb->data, iph, hdr_len); return 0; } @@ -51,13 +52,16 @@ static int xfrm6_transport_output(struct */ static int xfrm6_transport_input(struct xfrm_state *x, struct sk_buff *skb) { - int ihl = skb->data - skb->h.raw; + int ihl = skb->data - skb_transport_header(skb); - if (skb->h.raw != skb->nh.raw) - skb->nh.raw = memmove(skb->h.raw, skb->nh.raw, ihl); - skb->nh.ipv6h->payload_len = htons(skb->len + ihl - + if (skb->transport_header != skb->network_header) { + memmove(skb_transport_header(skb), + skb_network_header(skb), ihl); + skb->network_header = skb->transport_header; + } + ipv6_hdr(skb)->payload_len = htons(skb->len + ihl - sizeof(struct ipv6hdr)); - skb->h.raw = skb->data; + skb_reset_transport_header(skb); return 0; } diff -puN net/ipv6/xfrm6_mode_tunnel.c~git-net net/ipv6/xfrm6_mode_tunnel.c --- a/net/ipv6/xfrm6_mode_tunnel.c~git-net +++ a/net/ipv6/xfrm6_mode_tunnel.c @@ -18,8 +18,8 @@ static inline void ipip6_ecn_decapsulate(struct sk_buff *skb) { - struct ipv6hdr *outer_iph = skb->nh.ipv6h; - struct ipv6hdr *inner_iph = skb->h.ipv6h; + struct ipv6hdr *outer_iph = ipv6_hdr(skb); + struct ipv6hdr *inner_iph = ipipv6_hdr(skb); if (INET_ECN_is_ce(ipv6_get_dsfield(outer_iph))) IP6_ECN_set_ce(inner_iph); @@ -27,8 +27,8 @@ static inline void ipip6_ecn_decapsulate static inline void ip6ip_ecn_decapsulate(struct sk_buff *skb) { - if (INET_ECN_is_ce(ipv6_get_dsfield(skb->nh.ipv6h))) - IP_ECN_set_ce(skb->h.ipiph); + if (INET_ECN_is_ce(ipv6_get_dsfield(ipv6_hdr(skb)))) + IP_ECN_set_ce(ipip_hdr(skb)); } /* Add encapsulation header. 
@@ -51,12 +51,12 @@ static int xfrm6_tunnel_output(struct xf int dsfield; skb_push(skb, x->props.header_len); - iph = skb->nh.ipv6h; + iph = ipv6_hdr(skb); - skb->nh.raw = skb->data; - top_iph = skb->nh.ipv6h; - skb->nh.raw = &top_iph->nexthdr; - skb->h.ipv6h = top_iph + 1; + skb_reset_network_header(skb); + top_iph = ipv6_hdr(skb); + skb->transport_header = skb->network_header + sizeof(struct ipv6hdr); + skb->network_header += offsetof(struct ipv6hdr, nexthdr); top_iph->version = 6; if (xdst->route->ops->family == AF_INET6) { @@ -86,9 +86,11 @@ static int xfrm6_tunnel_output(struct xf static int xfrm6_tunnel_input(struct xfrm_state *x, struct sk_buff *skb) { int err = -EINVAL; + const unsigned char *old_mac; + const unsigned char *nh = skb_network_header(skb); - if (skb->nh.raw[IP6CB(skb)->nhoff] != IPPROTO_IPV6 - && skb->nh.raw[IP6CB(skb)->nhoff] != IPPROTO_IPIP) + if (nh[IP6CB(skb)->nhoff] != IPPROTO_IPV6 && + nh[IP6CB(skb)->nhoff] != IPPROTO_IPIP) goto out; if (!pskb_may_pull(skb, sizeof(struct ipv6hdr))) goto out; @@ -97,9 +99,10 @@ static int xfrm6_tunnel_input(struct xfr (err = pskb_expand_head(skb, 0, 0, GFP_ATOMIC))) goto out; - if (skb->nh.raw[IP6CB(skb)->nhoff] == IPPROTO_IPV6) { + nh = skb_network_header(skb); + if (nh[IP6CB(skb)->nhoff] == IPPROTO_IPV6) { if (x->props.flags & XFRM_STATE_DECAP_DSCP) - ipv6_copy_dscp(skb->nh.ipv6h, skb->h.ipv6h); + ipv6_copy_dscp(ipv6_hdr(skb), ipipv6_hdr(skb)); if (!(x->props.flags & XFRM_STATE_NOECN)) ipip6_ecn_decapsulate(skb); } else { @@ -107,9 +110,10 @@ static int xfrm6_tunnel_input(struct xfr ip6ip_ecn_decapsulate(skb); skb->protocol = htons(ETH_P_IP); } - skb->mac.raw = memmove(skb->data - skb->mac_len, - skb->mac.raw, skb->mac_len); - skb->nh.raw = skb->data; + old_mac = skb_mac_header(skb); + skb_set_mac_header(skb, -skb->mac_len); + memmove(skb_mac_header(skb), old_mac, skb->mac_len); + skb_reset_network_header(skb); err = 0; out: diff -puN net/ipv6/xfrm6_output.c~git-net net/ipv6/xfrm6_output.c --- a/net/ipv6/xfrm6_output.c~git-net +++ a/net/ipv6/xfrm6_output.c @@ -23,6 +23,8 @@ int xfrm6_find_1stfragopt(struct xfrm_st return ip6_find_1stfragopt(skb, prevhdr); } +EXPORT_SYMBOL(xfrm6_find_1stfragopt); + static int xfrm6_tunnel_check_size(struct sk_buff *skb) { int mtu, ret = 0; @@ -76,11 +78,11 @@ static int xfrm6_output_one(struct sk_bu x->curlft.bytes += skb->len; x->curlft.packets++; if (x->props.mode == XFRM_MODE_ROUTEOPTIMIZATION) - x->lastused = (u64)xtime.tv_sec; + x->lastused = get_seconds(); spin_unlock_bh(&x->lock); - skb->nh.raw = skb->data; + skb_reset_network_header(skb); if (!(skb->dst = dst_pop(dst))) { err = -EHOSTUNREACH; diff -puN net/ipv6/xfrm6_policy.c~git-net net/ipv6/xfrm6_policy.c --- a/net/ipv6/xfrm6_policy.c~git-net +++ a/net/ipv6/xfrm6_policy.c @@ -240,7 +240,8 @@ __xfrm6_bundle_create(struct xfrm_policy if (!afinfo) { dst = *dst_p; goto error; - }; + } + dst_prev->output = afinfo->output; xfrm_state_put_afinfo(afinfo); /* Sheit... I remember I did this right. 
Apparently, @@ -270,17 +271,19 @@ error: static inline void _decode_session6(struct sk_buff *skb, struct flowi *fl) { - u16 offset = skb->h.raw - skb->nh.raw; - struct ipv6hdr *hdr = skb->nh.ipv6h; + u16 offset = skb_network_header_len(skb); + struct ipv6hdr *hdr = ipv6_hdr(skb); struct ipv6_opt_hdr *exthdr; - u8 nexthdr = skb->nh.raw[IP6CB(skb)->nhoff]; + const unsigned char *nh = skb_network_header(skb); + u8 nexthdr = nh[IP6CB(skb)->nhoff]; memset(fl, 0, sizeof(struct flowi)); ipv6_addr_copy(&fl->fl6_dst, &hdr->daddr); ipv6_addr_copy(&fl->fl6_src, &hdr->saddr); - while (pskb_may_pull(skb, skb->nh.raw + offset + 1 - skb->data)) { - exthdr = (struct ipv6_opt_hdr*)(skb->nh.raw + offset); + while (pskb_may_pull(skb, nh + offset + 1 - skb->data)) { + nh = skb_network_header(skb); + exthdr = (struct ipv6_opt_hdr *)(nh + offset); switch (nexthdr) { case NEXTHDR_ROUTING: @@ -288,7 +291,7 @@ _decode_session6(struct sk_buff *skb, st case NEXTHDR_DEST: offset += ipv6_optlen(exthdr); nexthdr = exthdr->nexthdr; - exthdr = (struct ipv6_opt_hdr*)(skb->nh.raw + offset); + exthdr = (struct ipv6_opt_hdr *)(nh + offset); break; case IPPROTO_UDP: @@ -296,7 +299,7 @@ _decode_session6(struct sk_buff *skb, st case IPPROTO_TCP: case IPPROTO_SCTP: case IPPROTO_DCCP: - if (pskb_may_pull(skb, skb->nh.raw + offset + 4 - skb->data)) { + if (pskb_may_pull(skb, nh + offset + 4 - skb->data)) { __be16 *ports = (__be16 *)exthdr; fl->fl_ip_sport = ports[0]; @@ -306,7 +309,7 @@ _decode_session6(struct sk_buff *skb, st return; case IPPROTO_ICMPV6: - if (pskb_may_pull(skb, skb->nh.raw + offset + 2 - skb->data)) { + if (pskb_may_pull(skb, nh + offset + 2 - skb->data)) { u8 *icmp = (u8 *)exthdr; fl->fl_icmp_type = icmp[0]; @@ -317,7 +320,7 @@ _decode_session6(struct sk_buff *skb, st #ifdef CONFIG_IPV6_MIP6 case IPPROTO_MH: - if (pskb_may_pull(skb, skb->nh.raw + offset + 3 - skb->data)) { + if (pskb_may_pull(skb, nh + offset + 3 - skb->data)) { struct ip6_mh *mh; mh = (struct ip6_mh *)exthdr; @@ -335,7 +338,7 @@ _decode_session6(struct sk_buff *skb, st fl->fl_ipsec_spi = 0; fl->proto = nexthdr; return; - }; + } } } diff -puN net/ipv6/xfrm6_tunnel.c~git-net net/ipv6/xfrm6_tunnel.c --- a/net/ipv6/xfrm6_tunnel.c~git-net +++ a/net/ipv6/xfrm6_tunnel.c @@ -257,7 +257,7 @@ static int xfrm6_tunnel_input(struct xfr static int xfrm6_tunnel_rcv(struct sk_buff *skb) { - struct ipv6hdr *iph = skb->nh.ipv6h; + struct ipv6hdr *iph = ipv6_hdr(skb); __be32 spi; spi = xfrm6_tunnel_spi_lookup((xfrm_address_t *)&iph->saddr); diff -puN net/ipx/af_ipx.c~git-net net/ipx/af_ipx.c --- a/net/ipx/af_ipx.c~git-net +++ a/net/ipx/af_ipx.c @@ -576,7 +576,9 @@ static struct sk_buff *ipxitf_adjust_skb skb2 = alloc_skb(len, GFP_ATOMIC); if (skb2) { skb_reserve(skb2, out_offset); - skb2->nh.raw = skb2->h.raw = skb_put(skb2, skb->len); + skb_reset_network_header(skb2); + skb_reset_transport_header(skb2); + skb_put(skb2, skb->len); memcpy(ipx_hdr(skb2), ipx_hdr(skb), skb->len); memcpy(skb2->cb, skb->cb, sizeof(skb->cb)); } @@ -1807,8 +1809,8 @@ static int ipx_recvmsg(struct kiocb *ioc copied); if (rc) goto out_free; - if (skb->tstamp.off_sec) - skb_get_timestamp(skb, &sk->sk_stamp); + if (skb->tstamp.tv64) + sk->sk_stamp = skb->tstamp; msg->msg_namelen = sizeof(*sipx); diff -puN net/ipx/ipx_route.c~git-net net/ipx/ipx_route.c --- a/net/ipx/ipx_route.c~git-net +++ a/net/ipx/ipx_route.c @@ -203,7 +203,9 @@ int ipxrtr_route_packet(struct sock *sk, skb->sk = sk; /* Fill in IPX header */ - skb->h.raw = skb->nh.raw = skb_put(skb, sizeof(struct ipxhdr)); + 
skb_reset_network_header(skb); + skb_reset_transport_header(skb); + skb_put(skb, sizeof(struct ipxhdr)); ipx = ipx_hdr(skb); ipx->ipx_pktsize = htons(len + sizeof(struct ipxhdr)); IPX_SKB_CB(skb)->ipx_tctrl = 0; diff -puN net/irda/af_irda.c~git-net net/irda/af_irda.c --- a/net/irda/af_irda.c~git-net +++ a/net/irda/af_irda.c @@ -89,7 +89,6 @@ static int irda_data_indication(void *in self = instance; sk = instance; - IRDA_ASSERT(sk != NULL, return -1;); err = sock_queue_rcv_skb(sk, skb); if (err) { @@ -131,14 +130,12 @@ static void irda_disconnect_indication(v } /* Prevent race conditions with irda_release() and irda_shutdown() */ + bh_lock_sock(sk); if (!sock_flag(sk, SOCK_DEAD) && sk->sk_state != TCP_CLOSE) { - lock_sock(sk); sk->sk_state = TCP_CLOSE; - sk->sk_err = ECONNRESET; sk->sk_shutdown |= SEND_SHUTDOWN; sk->sk_state_change(sk); - release_sock(sk); /* Close our TSAP. * If we leave it open, IrLMP put it back into the list of @@ -158,6 +155,7 @@ static void irda_disconnect_indication(v self->tsap = NULL; } } + bh_unlock_sock(sk); /* Note : once we are there, there is not much you want to do * with the socket anymore, apart from closing it. @@ -220,7 +218,7 @@ static void irda_connect_confirm(void *i break; default: self->max_data_size = irttp_get_max_seg_size(self->tsap); - }; + } IRDA_DEBUG(2, "%s(), max_data_size=%d\n", __FUNCTION__, self->max_data_size); @@ -283,7 +281,7 @@ static void irda_connect_indication(void break; default: self->max_data_size = irttp_get_max_seg_size(self->tsap); - }; + } IRDA_DEBUG(2, "%s(), max_data_size=%d\n", __FUNCTION__, self->max_data_size); @@ -306,8 +304,6 @@ static void irda_connect_response(struct IRDA_DEBUG(2, "%s()\n", __FUNCTION__); - IRDA_ASSERT(self != NULL, return;); - skb = alloc_skb(TTP_MAX_HEADER + TTP_SAR_HEADER, GFP_ATOMIC); if (skb == NULL) { @@ -337,7 +333,7 @@ static void irda_flow_indication(void *i self = instance; sk = instance; - IRDA_ASSERT(sk != NULL, return;); + BUG_ON(sk == NULL); switch (flow) { case FLOW_STOP: @@ -449,7 +445,7 @@ static void irda_discovery_timeout(u_lon IRDA_DEBUG(2, "%s()\n", __FUNCTION__); self = (struct irda_sock *) priv; - IRDA_ASSERT(self != NULL, return;); + BUG_ON(self == NULL); /* Nothing for the caller */ self->cachelog = NULL; @@ -546,8 +542,6 @@ static int irda_find_lsap_sel(struct ird { IRDA_DEBUG(2, "%s(%p, %s)\n", __FUNCTION__, self, name); - IRDA_ASSERT(self != NULL, return -1;); - if (self->iriap) { IRDA_WARNING("%s(): busy with a previous query\n", __FUNCTION__); @@ -635,8 +629,6 @@ static int irda_discover_daddr_and_lsap_ IRDA_DEBUG(2, "%s(), name=%s\n", __FUNCTION__, name); - IRDA_ASSERT(self != NULL, return -1;); - /* Ask lmp for the current discovery log * Note : we have to use irlmp_get_discoveries(), as opposed * to play with the cachelog directly, because while we are @@ -784,8 +776,6 @@ static int irda_bind(struct socket *sock struct irda_sock *self = irda_sk(sk); int err; - IRDA_ASSERT(self != NULL, return -1;); - IRDA_DEBUG(2, "%s(%p)\n", __FUNCTION__, self); if (addr_len != sizeof(struct sockaddr_irda)) @@ -841,8 +831,6 @@ static int irda_accept(struct socket *so IRDA_DEBUG(2, "%s()\n", __FUNCTION__); - IRDA_ASSERT(self != NULL, return -1;); - err = irda_create(newsock, sk->sk_protocol); if (err) return err; @@ -873,44 +861,28 @@ static int irda_accept(struct socket *so * calling us, the data is waiting for us ;-) * Jean II */ - skb = skb_dequeue(&sk->sk_receive_queue); - if (skb == NULL) { - int ret = 0; - DECLARE_WAITQUEUE(waitq, current); + while (1) { + skb = 
skb_dequeue(&sk->sk_receive_queue); + if (skb) + break; /* Non blocking operation */ if (flags & O_NONBLOCK) return -EWOULDBLOCK; - /* The following code is a cut'n'paste of the - * wait_event_interruptible() macro. - * We don't us the macro because the condition has - * side effects : we want to make sure that only one - * skb get dequeued - Jean II */ - add_wait_queue(sk->sk_sleep, &waitq); - for (;;) { - set_current_state(TASK_INTERRUPTIBLE); - skb = skb_dequeue(&sk->sk_receive_queue); - if (skb != NULL) - break; - if (!signal_pending(current)) { - schedule(); - continue; - } - ret = -ERESTARTSYS; - break; - } - current->state = TASK_RUNNING; - remove_wait_queue(sk->sk_sleep, &waitq); - if(ret) - return -ERESTARTSYS; + err = wait_event_interruptible(*(sk->sk_sleep), + skb_peek(&sk->sk_receive_queue)); + if (err) + return err; } newsk = newsock->sk; + if (newsk == NULL) + return -EIO; + newsk->sk_state = TCP_ESTABLISHED; new = irda_sk(newsk); - IRDA_ASSERT(new != NULL, return -1;); /* Now attach up the new socket */ new->tsap = irttp_dup(self->tsap, new); @@ -1061,7 +1033,8 @@ static int irda_connect(struct socket *s if (sk->sk_state != TCP_ESTABLISHED) { sock->state = SS_UNCONNECTED; - return sock_error(sk); /* Always set at this point */ + err = sock_error(sk); + return err? err : -ECONNRESET; } sock->state = SS_CONNECTED; @@ -1171,8 +1144,6 @@ static void irda_destroy_socket(struct i { IRDA_DEBUG(2, "%s(%p)\n", __FUNCTION__, self); - IRDA_ASSERT(self != NULL, return;); - /* Unregister with IrLMP */ irlmp_unregister_client(self->ckey); irlmp_unregister_service(self->skey); @@ -1274,7 +1245,6 @@ static int irda_sendmsg(struct kiocb *io struct sock *sk = sock->sk; struct irda_sock *self; struct sk_buff *skb; - unsigned char *asmptr; int err; IRDA_DEBUG(4, "%s(), len=%zd\n", __FUNCTION__, len); @@ -1292,7 +1262,6 @@ static int irda_sendmsg(struct kiocb *io return -ENOTCONN; self = irda_sk(sk); - IRDA_ASSERT(self != NULL, return -1;); /* Check if IrTTP is wants us to slow down */ @@ -1317,9 +1286,9 @@ static int irda_sendmsg(struct kiocb *io return -ENOBUFS; skb_reserve(skb, self->max_header_size + 16); - - asmptr = skb->h.raw = skb_put(skb, len); - err = memcpy_fromiovec(asmptr, msg->msg_iov, len); + skb_reset_transport_header(skb); + skb_put(skb, len); + err = memcpy_fromiovec(skb_transport_header(skb), msg->msg_iov, len); if (err) { kfree_skb(skb); return err; @@ -1355,16 +1324,16 @@ static int irda_recvmsg_dgram(struct kio IRDA_DEBUG(4, "%s()\n", __FUNCTION__); - IRDA_ASSERT(self != NULL, return -1;); - IRDA_ASSERT(!sock_error(sk), return -1;); + if ((err = sock_error(sk)) < 0) + return err; skb = skb_recv_datagram(sk, flags & ~MSG_DONTWAIT, flags & MSG_DONTWAIT, &err); if (!skb) return err; - skb->h.raw = skb->data; - copied = skb->len; + skb_reset_transport_header(skb); + copied = skb->len; if (copied > size) { IRDA_DEBUG(2, "%s(), Received truncated frame (%zd < %zd)!\n", @@ -1403,13 +1372,13 @@ static int irda_recvmsg_stream(struct ki struct irda_sock *self = irda_sk(sk); int noblock = flags & MSG_DONTWAIT; size_t copied = 0; - int target = 1; - DECLARE_WAITQUEUE(waitq, current); + int target, err; + long timeo; IRDA_DEBUG(3, "%s()\n", __FUNCTION__); - IRDA_ASSERT(self != NULL, return -1;); - IRDA_ASSERT(!sock_error(sk), return -1;); + if ((err = sock_error(sk)) < 0) + return err; if (sock->flags & __SO_ACCEPTCON) return(-EINVAL); @@ -1417,8 +1386,8 @@ static int irda_recvmsg_stream(struct ki if (flags & MSG_OOB) return -EOPNOTSUPP; - if (flags & MSG_WAITALL) - target = size; + 
target = sock_rcvlowat(sk, flags & MSG_WAITALL, size); + timeo = sock_rcvtimeo(sk, noblock); msg->msg_namelen = 0; @@ -1426,19 +1395,14 @@ static int irda_recvmsg_stream(struct ki int chunk; struct sk_buff *skb = skb_dequeue(&sk->sk_receive_queue); - if (skb==NULL) { + if (skb == NULL) { + DEFINE_WAIT(wait); int ret = 0; if (copied >= target) break; - /* The following code is a cut'n'paste of the - * wait_event_interruptible() macro. - * We don't us the macro because the test condition - * is messy. - Jean II */ - set_bit(SOCK_ASYNC_WAITDATA, &sk->sk_socket->flags); - add_wait_queue(sk->sk_sleep, &waitq); - set_current_state(TASK_INTERRUPTIBLE); + prepare_to_wait_exclusive(sk->sk_sleep, &wait, TASK_INTERRUPTIBLE); /* * POSIX 1003.1g mandates this order. @@ -1451,17 +1415,17 @@ static int irda_recvmsg_stream(struct ki else if (noblock) ret = -EAGAIN; else if (signal_pending(current)) - ret = -ERESTARTSYS; + ret = sock_intr_errno(timeo); + else if (sk->sk_state != TCP_ESTABLISHED) + ret = -ENOTCONN; else if (skb_peek(&sk->sk_receive_queue) == NULL) /* Wait process until data arrives */ schedule(); - current->state = TASK_RUNNING; - remove_wait_queue(sk->sk_sleep, &waitq); - clear_bit(SOCK_ASYNC_WAITDATA, &sk->sk_socket->flags); + finish_wait(sk->sk_sleep, &wait); - if(ret) - return(ret); + if (ret) + return ret; if (sk->sk_shutdown & RCV_SHUTDOWN) break; @@ -1530,7 +1494,6 @@ static int irda_sendmsg_dgram(struct kio struct sock *sk = sock->sk; struct irda_sock *self; struct sk_buff *skb; - unsigned char *asmptr; int err; IRDA_DEBUG(4, "%s(), len=%zd\n", __FUNCTION__, len); @@ -1547,7 +1510,6 @@ static int irda_sendmsg_dgram(struct kio return -ENOTCONN; self = irda_sk(sk); - IRDA_ASSERT(self != NULL, return -1;); /* * Check that we don't send out too big frames. This is an unreliable @@ -1566,10 +1528,11 @@ static int irda_sendmsg_dgram(struct kio return -ENOBUFS; skb_reserve(skb, self->max_header_size); + skb_reset_transport_header(skb); IRDA_DEBUG(4, "%s(), appending user data\n", __FUNCTION__); - asmptr = skb->h.raw = skb_put(skb, len); - err = memcpy_fromiovec(asmptr, msg->msg_iov, len); + skb_put(skb, len); + err = memcpy_fromiovec(skb_transport_header(skb), msg->msg_iov, len); if (err) { kfree_skb(skb); return err; @@ -1602,7 +1565,6 @@ static int irda_sendmsg_ultra(struct kio __u8 pid = 0; int bound = 0; struct sk_buff *skb; - unsigned char *asmptr; int err; IRDA_DEBUG(4, "%s(), len=%zd\n", __FUNCTION__, len); @@ -1616,7 +1578,6 @@ static int irda_sendmsg_ultra(struct kio } self = irda_sk(sk); - IRDA_ASSERT(self != NULL, return -1;); /* Check if an address was specified with sendto. 
Jean II */ if (msg->msg_name) { @@ -1662,10 +1623,11 @@ static int irda_sendmsg_ultra(struct kio return -ENOBUFS; skb_reserve(skb, self->max_header_size); + skb_reset_transport_header(skb); IRDA_DEBUG(4, "%s(), appending user data\n", __FUNCTION__); - asmptr = skb->h.raw = skb_put(skb, len); - err = memcpy_fromiovec(asmptr, msg->msg_iov, len); + skb_put(skb, len); + err = memcpy_fromiovec(skb_transport_header(skb), msg->msg_iov, len); if (err) { kfree_skb(skb); return err; @@ -1689,8 +1651,6 @@ static int irda_shutdown(struct socket * struct sock *sk = sock->sk; struct irda_sock *self = irda_sk(sk); - IRDA_ASSERT(self != NULL, return -1;); - IRDA_DEBUG(1, "%s(%p)\n", __FUNCTION__, self); sk->sk_state = TCP_CLOSE; @@ -1863,8 +1823,6 @@ static int irda_setsockopt(struct socket struct ias_attrib * ias_attr; /* Attribute in IAS object */ int opt; - IRDA_ASSERT(self != NULL, return -1;); - IRDA_DEBUG(2, "%s(%p)\n", __FUNCTION__, self); if (level != SOL_IRLMP) diff -puN net/irda/ircomm/ircomm_param.c~git-net net/irda/ircomm/ircomm_param.c --- a/net/irda/ircomm/ircomm_param.c~git-net +++ a/net/irda/ircomm/ircomm_param.c @@ -133,8 +133,8 @@ int ircomm_param_request(struct ircomm_t * Inserting is a little bit tricky since we don't know how much * room we will need. But this should hopefully work OK */ - count = irda_param_insert(self, pi, skb->tail, skb_tailroom(skb), - &ircomm_param_info); + count = irda_param_insert(self, pi, skb_tail_pointer(skb), + skb_tailroom(skb), &ircomm_param_info); if (count < 0) { IRDA_WARNING("%s(), no room for parameter!\n", __FUNCTION__); spin_unlock_irqrestore(&self->spinlock, flags); diff -puN net/irda/irlan/irlan_common.c~git-net net/irda/irlan/irlan_common.c --- a/net/irda/irlan/irlan_common.c~git-net +++ a/net/irda/irlan/irlan_common.c @@ -1039,7 +1039,7 @@ static int __irlan_insert_param(struct s } /* Insert at end of sk-buffer */ - frame = skb->tail; + frame = skb_tail_pointer(skb); /* Make space for data */ if (skb_tailroom(skb) < (param_len+value_len+3)) { diff -puN net/irda/irlan/irlan_eth.c~git-net net/irda/irlan/irlan_eth.c --- a/net/irda/irlan/irlan_eth.c~git-net +++ a/net/irda/irlan/irlan_eth.c @@ -234,8 +234,7 @@ int irlan_eth_receive(void *instance, vo * might have been previously set by the low level IrDA network * device driver */ - skb->dev = self->dev; - skb->protocol=eth_type_trans(skb, skb->dev); /* Remove eth header */ + skb->protocol = eth_type_trans(skb, self->dev); /* Remove eth header */ self->stats.rx_packets++; self->stats.rx_bytes += skb->len; diff -puN net/irda/irlap_event.c~git-net net/irda/irlap_event.c --- a/net/irda/irlap_event.c~git-net +++ a/net/irda/irlap_event.c @@ -590,7 +590,7 @@ static int irlap_state_query(struct irla if (!self->discovery_log) { IRDA_WARNING("%s: discovery log is gone! 
" "maybe the discovery timeout has been set" - " to short?\n", __FUNCTION__); + " too short?\n", __FUNCTION__); break; } hashbin_insert(self->discovery_log, diff -puN net/irda/irlap_frame.c~git-net net/irda/irlap_frame.c --- a/net/irda/irlap_frame.c~git-net +++ a/net/irda/irlap_frame.c @@ -93,7 +93,9 @@ void irlap_queue_xmit(struct irlap_cb *s { /* Some common init stuff */ skb->dev = self->netdev; - skb->h.raw = skb->nh.raw = skb->mac.raw = skb->data; + skb_reset_mac_header(skb); + skb_reset_network_header(skb); + skb_reset_transport_header(skb); skb->protocol = htons(ETH_P_IRDA); skb->priority = TC_PRIO_BESTEFFORT; @@ -411,7 +413,7 @@ static void irlap_recv_discovery_xid_rsp IRDA_ASSERT(self->magic == LAP_MAGIC, return;); if (!pskb_may_pull(skb, sizeof(struct xid_frame))) { - IRDA_ERROR("%s: frame to short!\n", __FUNCTION__); + IRDA_ERROR("%s: frame too short!\n", __FUNCTION__); return; } @@ -482,7 +484,7 @@ static void irlap_recv_discovery_xid_cmd char *text; if (!pskb_may_pull(skb, sizeof(struct xid_frame))) { - IRDA_ERROR("%s: frame to short!\n", __FUNCTION__); + IRDA_ERROR("%s: frame too short!\n", __FUNCTION__); return; } @@ -526,7 +528,7 @@ static void irlap_recv_discovery_xid_cmd /* Check if things are sane at this point... */ if((discovery_info == NULL) || !pskb_may_pull(skb, 3)) { - IRDA_ERROR("%s: discovery frame to short!\n", + IRDA_ERROR("%s: discovery frame too short!\n", __FUNCTION__); return; } @@ -1171,7 +1173,7 @@ static void irlap_recv_frmr_frame(struct IRDA_ASSERT(info != NULL, return;); if (!pskb_may_pull(skb, 4)) { - IRDA_ERROR("%s: frame to short!\n", __FUNCTION__); + IRDA_ERROR("%s: frame too short!\n", __FUNCTION__); return; } @@ -1260,7 +1262,7 @@ static void irlap_recv_test_frame(struct IRDA_DEBUG(2, "%s()\n", __FUNCTION__); if (!pskb_may_pull(skb, sizeof(*frame))) { - IRDA_ERROR("%s: frame to short!\n", __FUNCTION__); + IRDA_ERROR("%s: frame too short!\n", __FUNCTION__); return; } frame = (struct test_frame *) skb->data; @@ -1268,7 +1270,7 @@ static void irlap_recv_test_frame(struct /* Broadcast frames must carry saddr and daddr fields */ if (info->caddr == CBROADCAST) { if (skb->len < sizeof(struct test_frame)) { - IRDA_DEBUG(0, "%s() test frame to short!\n", + IRDA_DEBUG(0, "%s() test frame too short!\n", __FUNCTION__); return; } @@ -1334,7 +1336,7 @@ int irlap_driver_rcv(struct sk_buff *skb /* Check if frame is large enough for parsing */ if (!pskb_may_pull(skb, 2)) { - IRDA_ERROR("%s: frame to short!\n", __FUNCTION__); + IRDA_ERROR("%s: frame too short!\n", __FUNCTION__); dev_kfree_skb(skb); return -1; } diff -puN net/irda/irqueue.c~git-net net/irda/irqueue.c --- a/net/irda/irqueue.c~git-net +++ a/net/irda/irqueue.c @@ -384,6 +384,9 @@ EXPORT_SYMBOL(hashbin_new); * for deallocating this structure if it's complex. If not the user can * just supply kfree, which should take care of the job. 
*/ +#ifdef CONFIG_LOCKDEP +static int hashbin_lock_depth = 0; +#endif int hashbin_delete( hashbin_t* hashbin, FREE_FUNC free_func) { irda_queue_t* queue; @@ -395,7 +398,8 @@ int hashbin_delete( hashbin_t* hashbin, /* Synchronize */ if ( hashbin->hb_type & HB_LOCK ) { - spin_lock_irqsave(&hashbin->hb_spinlock, flags); + spin_lock_irqsave_nested(&hashbin->hb_spinlock, flags, + hashbin_lock_depth++); } /* @@ -419,6 +423,9 @@ int hashbin_delete( hashbin_t* hashbin, /* Release lock */ if ( hashbin->hb_type & HB_LOCK) { spin_unlock_irqrestore(&hashbin->hb_spinlock, flags); +#ifdef CONFIG_LOCKDEP + hashbin_lock_depth--; +#endif } /* diff -puN net/irda/irttp.c~git-net net/irda/irttp.c --- a/net/irda/irttp.c~git-net +++ a/net/irda/irttp.c @@ -256,7 +256,7 @@ static struct sk_buff *irttp_reassemble_ * Copy all fragments to a new buffer */ while ((frag = skb_dequeue(&self->rx_fragments)) != NULL) { - memcpy(skb->data+n, frag->data, frag->len); + skb_copy_to_linear_data_offset(skb, n, frag->data, frag->len); n += frag->len; dev_kfree_skb(frag); @@ -314,8 +314,8 @@ static inline void irttp_fragment_skb(st skb_reserve(frag, self->max_header_size); /* Copy data from the original skb into this fragment. */ - memcpy(skb_put(frag, self->max_seg_size), skb->data, - self->max_seg_size); + skb_copy_from_linear_data(skb, skb_put(frag, self->max_seg_size), + self->max_seg_size); /* Insert TTP header, with the more bit set */ frame = skb_push(frag, TTP_HEADER); @@ -551,7 +551,7 @@ int irttp_udata_request(struct tsap_cb * } if (skb->len > self->max_seg_size) { - IRDA_DEBUG(1, "%s(), UData is to large for IrLAP!\n", + IRDA_DEBUG(1, "%s(), UData is too large for IrLAP!\n", __FUNCTION__); goto err; } @@ -598,7 +598,7 @@ int irttp_data_request(struct tsap_cb *s * inside an IrLAP frame */ if ((self->tx_max_sdu_size == 0) && (skb->len > self->max_seg_size)) { - IRDA_ERROR("%s: SAR disabled, and data is to large for IrLAP!\n", + IRDA_ERROR("%s: SAR disabled, and data is too large for IrLAP!\n", __FUNCTION__); ret = -EMSGSIZE; goto err; diff -puN net/irda/parameters.c~git-net net/irda/parameters.c --- a/net/irda/parameters.c~git-net +++ a/net/irda/parameters.c @@ -160,7 +160,7 @@ static int irda_insert_integer(void *sel } /* Check if buffer is long enough for insertion */ if (len < (2+p.pl)) { - IRDA_WARNING("%s: buffer to short for insertion!\n", + IRDA_WARNING("%s: buffer too short for insertion!\n", __FUNCTION__); return -1; } @@ -216,7 +216,7 @@ static int irda_extract_integer(void *se /* Check if buffer is long enough for parsing */ if (len < (2+p.pl)) { - IRDA_WARNING("%s: buffer to short for parsing! " + IRDA_WARNING("%s: buffer too short for parsing! " "Need %d bytes, but len is only %d\n", __FUNCTION__, p.pl, len); return -1; @@ -304,7 +304,7 @@ static int irda_extract_string(void *sel /* Check if buffer is long enough for parsing */ if (len < (2+p.pl)) { - IRDA_WARNING("%s: buffer to short for parsing! " + IRDA_WARNING("%s: buffer too short for parsing! " "Need %d bytes, but len is only %d\n", __FUNCTION__, p.pl, len); return -1; @@ -343,7 +343,7 @@ static int irda_extract_octseq(void *sel /* Check if buffer is long enough for parsing */ if (len < (2+p.pl)) { - IRDA_WARNING("%s: buffer to short for parsing! " + IRDA_WARNING("%s: buffer too short for parsing! 
" "Need %d bytes, but len is only %d\n", __FUNCTION__, p.pl, len); return -1; diff -puN net/irda/qos.c~git-net net/irda/qos.c --- a/net/irda/qos.c~git-net +++ a/net/irda/qos.c @@ -469,49 +469,49 @@ int irlap_insert_qos_negotiation_params( int ret; /* Insert data rate */ - ret = irda_param_insert(self, PI_BAUD_RATE, skb->tail, + ret = irda_param_insert(self, PI_BAUD_RATE, skb_tail_pointer(skb), skb_tailroom(skb), &irlap_param_info); if (ret < 0) return ret; skb_put(skb, ret); /* Insert max turnaround time */ - ret = irda_param_insert(self, PI_MAX_TURN_TIME, skb->tail, + ret = irda_param_insert(self, PI_MAX_TURN_TIME, skb_tail_pointer(skb), skb_tailroom(skb), &irlap_param_info); if (ret < 0) return ret; skb_put(skb, ret); /* Insert data size */ - ret = irda_param_insert(self, PI_DATA_SIZE, skb->tail, + ret = irda_param_insert(self, PI_DATA_SIZE, skb_tail_pointer(skb), skb_tailroom(skb), &irlap_param_info); if (ret < 0) return ret; skb_put(skb, ret); /* Insert window size */ - ret = irda_param_insert(self, PI_WINDOW_SIZE, skb->tail, + ret = irda_param_insert(self, PI_WINDOW_SIZE, skb_tail_pointer(skb), skb_tailroom(skb), &irlap_param_info); if (ret < 0) return ret; skb_put(skb, ret); /* Insert additional BOFs */ - ret = irda_param_insert(self, PI_ADD_BOFS, skb->tail, + ret = irda_param_insert(self, PI_ADD_BOFS, skb_tail_pointer(skb), skb_tailroom(skb), &irlap_param_info); if (ret < 0) return ret; skb_put(skb, ret); /* Insert minimum turnaround time */ - ret = irda_param_insert(self, PI_MIN_TURN_TIME, skb->tail, + ret = irda_param_insert(self, PI_MIN_TURN_TIME, skb_tail_pointer(skb), skb_tailroom(skb), &irlap_param_info); if (ret < 0) return ret; skb_put(skb, ret); /* Insert link disconnect/threshold time */ - ret = irda_param_insert(self, PI_LINK_DISC, skb->tail, + ret = irda_param_insert(self, PI_LINK_DISC, skb_tail_pointer(skb), skb_tailroom(skb), &irlap_param_info); if (ret < 0) return ret; diff -puN net/irda/wrapper.c~git-net net/irda/wrapper.c --- a/net/irda/wrapper.c~git-net +++ a/net/irda/wrapper.c @@ -239,7 +239,8 @@ async_bump(struct net_device *dev, if(docopy) { /* Copy data without CRC (lenght already checked) */ - memcpy(newskb->data, rx_buff->data, rx_buff->len - 2); + skb_copy_to_linear_data(newskb, rx_buff->data, + rx_buff->len - 2); /* Deliver this skb */ dataskb = newskb; } else { @@ -256,7 +257,7 @@ async_bump(struct net_device *dev, /* Feed it to IrLAP layer */ dataskb->dev = dev; - dataskb->mac.raw = dataskb->data; + skb_reset_mac_header(dataskb); dataskb->protocol = htons(ETH_P_IRDA); netif_rx(dataskb); diff -puN net/iucv/af_iucv.c~git-net net/iucv/af_iucv.c --- a/net/iucv/af_iucv.c~git-net +++ a/net/iucv/af_iucv.c @@ -181,7 +181,7 @@ static void iucv_sock_close(struct sock default: sock_set_flag(sk, SOCK_ZAPPED); break; - }; + } release_sock(sk); iucv_sock_kill(sk); @@ -953,8 +953,8 @@ static void iucv_callback_rx(struct iucv return; } - skb->h.raw = skb->data; - skb->nh.raw = skb->data; + skb_reset_transport_header(skb); + skb_reset_network_header(skb); skb->len = msg->length; } diff -puN net/key/af_key.c~git-net net/key/af_key.c --- a/net/key/af_key.c~git-net +++ a/net/key/af_key.c @@ -379,7 +379,7 @@ static int verify_address_len(void *p) */ return -EINVAL; break; - }; + } return 0; } @@ -3667,7 +3667,7 @@ static int pfkey_recvmsg(struct kiocb *k copied = len; } - skb->h.raw = skb->data; + skb_reset_transport_header(skb); err = skb_copy_datagram_iovec(skb, 0, msg->msg_iov, copied); if (err) goto out_free; diff -puN net/llc/llc_input.c~git-net net/llc/llc_input.c --- 
a/net/llc/llc_input.c~git-net +++ a/net/llc/llc_input.c @@ -112,7 +112,7 @@ static inline int llc_fixup_skb(struct s if (unlikely(!pskb_may_pull(skb, llc_len))) return 0; - skb->h.raw += llc_len; + skb->transport_header += llc_len; skb_pull(skb, llc_len); if (skb->protocol == htons(ETH_P_802_2)) { __be16 pdulen = eth_hdr(skb)->h_proto; diff -puN net/llc/llc_output.c~git-net net/llc/llc_output.c --- a/net/llc/llc_output.c~git-net +++ a/net/llc/llc_output.c @@ -41,7 +41,8 @@ int llc_mac_hdr_init(struct sk_buff *skb struct net_device *dev = skb->dev; struct trh_hdr *trh; - skb->mac.raw = skb_push(skb, sizeof(*trh)); + skb_push(skb, sizeof(*trh)); + skb_reset_mac_header(skb); trh = tr_hdr(skb); trh->ac = AC; trh->fc = LLC_FRAME; @@ -52,7 +53,7 @@ int llc_mac_hdr_init(struct sk_buff *skb if (da) { memcpy(trh->daddr, da, dev->addr_len); tr_source_route(skb, trh, dev); - skb->mac.raw = skb->data; + skb_reset_mac_header(skb); } break; } @@ -62,7 +63,8 @@ int llc_mac_hdr_init(struct sk_buff *skb unsigned short len = skb->len; struct ethhdr *eth; - skb->mac.raw = skb_push(skb, sizeof(*eth)); + skb_push(skb, sizeof(*eth)); + skb_reset_mac_header(skb); eth = eth_hdr(skb); eth->h_proto = htons(len); memcpy(eth->h_dest, da, ETH_ALEN); diff -puN net/llc/llc_sap.c~git-net net/llc/llc_sap.c --- a/net/llc/llc_sap.c~git-net +++ a/net/llc/llc_sap.c @@ -36,11 +36,12 @@ struct sk_buff *llc_alloc_frame(struct s struct sk_buff *skb = alloc_skb(128, GFP_ATOMIC); if (skb) { + skb_reset_mac_header(skb); skb_reserve(skb, 50); - skb->nh.raw = skb->h.raw = skb->data; + skb_reset_network_header(skb); + skb_reset_transport_header(skb); skb->protocol = htons(ETH_P_802_2); skb->dev = dev; - skb->mac.raw = skb->head; if (sk != NULL) skb_set_owner_w(skb, sk); } diff -puN net/netfilter/Kconfig~git-net net/netfilter/Kconfig --- a/net/netfilter/Kconfig~git-net +++ a/net/netfilter/Kconfig @@ -25,6 +25,7 @@ config NETFILTER_NETLINK_LOG and is also scheduled to replace the old syslog-based ipt_LOG and ip6t_LOG modules. +# Rename this to NF_CONNTRACK in a 2.6.25 config NF_CONNTRACK_ENABLED tristate "Netfilter connection tracking support" help @@ -39,42 +40,9 @@ config NF_CONNTRACK_ENABLED To compile it as a module, choose M here. If unsure, say N. -choice - prompt "Netfilter connection tracking support" - depends on NF_CONNTRACK_ENABLED - -config NF_CONNTRACK_SUPPORT - bool "Layer 3 Independent Connection tracking" - help - Layer 3 independent connection tracking is experimental scheme - which generalize ip_conntrack to support other layer 3 protocols. - - This is required to do Masquerading or other kinds of Network - Address Translation (except for Fast NAT). It can also be used to - enhance packet filtering (see `Connection state match support' - below). - -config IP_NF_CONNTRACK_SUPPORT - bool "Layer 3 Dependent Connection tracking (OBSOLETE)" - help - The old, Layer 3 dependent ip_conntrack subsystem of netfilter. - - This is required to do Masquerading or other kinds of Network - Address Translation (except for Fast NAT). It can also be used to - enhance packet filtering (see `Connection state match support' - below). 
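The hashbin_delete() hunk above takes its lock with spin_lock_irqsave_nested() and an incrementing subclass, so lockdep does not complain when a free_func recurses into another hashbin_delete() while the first lock is held. A condensed sketch of the idiom with placeholder names; as in the patch, the counter only matters when lock debugging is built in, since the macro otherwise ignores the subclass argument:

#include <linux/spinlock.h>

#ifdef CONFIG_LOCKDEP
static int my_lock_depth;       /* one lockdep subclass per nesting level */
#endif

static void my_delete(spinlock_t *lock)
{
        unsigned long flags;

        /* Each recursion level gets its own subclass. */
        spin_lock_irqsave_nested(lock, flags, my_lock_depth++);

        /* ... tear the structure down; this may recurse into my_delete() ... */

        spin_unlock_irqrestore(lock, flags);
#ifdef CONFIG_LOCKDEP
        my_lock_depth--;
#endif
}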
- -endchoice - config NF_CONNTRACK tristate - default m if NF_CONNTRACK_SUPPORT && NF_CONNTRACK_ENABLED=m - default y if NF_CONNTRACK_SUPPORT && NF_CONNTRACK_ENABLED=y - -config IP_NF_CONNTRACK - tristate - default m if IP_NF_CONNTRACK_SUPPORT && NF_CONNTRACK_ENABLED=m - default y if IP_NF_CONNTRACK_SUPPORT && NF_CONNTRACK_ENABLED=y + default NF_CONNTRACK_ENABLED config NF_CT_ACCT bool "Connection tracking flow accounting" @@ -303,9 +271,8 @@ config NETFILTER_XT_TARGET_CONNMARK tristate '"CONNMARK" target support' depends on NETFILTER_XTABLES depends on IP_NF_MANGLE || IP6_NF_MANGLE - depends on IP_NF_CONNTRACK || NF_CONNTRACK - select IP_NF_CONNTRACK_MARK if IP_NF_CONNTRACK - select NF_CONNTRACK_MARK if NF_CONNTRACK + depends on NF_CONNTRACK + select NF_CONNTRACK_MARK help This option adds a `CONNMARK' target, which allows one to manipulate the connection mark value. Similar to the MARK target, but @@ -366,7 +333,7 @@ config NETFILTER_XT_TARGET_NOTRACK tristate '"NOTRACK" target support' depends on NETFILTER_XTABLES depends on IP_NF_RAW || IP6_NF_RAW - depends on IP_NF_CONNTRACK || NF_CONNTRACK + depends on NF_CONNTRACK help The NOTRACK target allows a select rule to specify which packets *not* to enter the conntrack/NAT @@ -387,9 +354,7 @@ config NETFILTER_XT_TARGET_SECMARK config NETFILTER_XT_TARGET_CONNSECMARK tristate '"CONNSECMARK" target support' - depends on NETFILTER_XTABLES && \ - ((NF_CONNTRACK && NF_CONNTRACK_SECMARK) || \ - (IP_NF_CONNTRACK && IP_NF_CONNTRACK_SECMARK)) + depends on NETFILTER_XTABLES && NF_CONNTRACK && NF_CONNTRACK_SECMARK help The CONNSECMARK target copies security markings from packets to connections, and restores security markings from connections @@ -437,9 +402,8 @@ config NETFILTER_XT_MATCH_COMMENT config NETFILTER_XT_MATCH_CONNBYTES tristate '"connbytes" per-connection counter match support' depends on NETFILTER_XTABLES - depends on IP_NF_CONNTRACK || NF_CONNTRACK - select IP_NF_CT_ACCT if IP_NF_CONNTRACK - select NF_CT_ACCT if NF_CONNTRACK + depends on NF_CONNTRACK + select NF_CT_ACCT help This option adds a `connbytes' match, which allows you to match the number of bytes and/or packets for each direction within a connection. @@ -450,9 +414,8 @@ config NETFILTER_XT_MATCH_CONNBYTES config NETFILTER_XT_MATCH_CONNMARK tristate '"connmark" connection mark match support' depends on NETFILTER_XTABLES - depends on IP_NF_CONNTRACK || NF_CONNTRACK - select IP_NF_CONNTRACK_MARK if IP_NF_CONNTRACK - select NF_CONNTRACK_MARK if NF_CONNTRACK + depends on NF_CONNTRACK + select NF_CONNTRACK_MARK help This option adds a `connmark' match, which allows you to match the connection mark value previously set for the session by `CONNMARK'. @@ -464,7 +427,7 @@ config NETFILTER_XT_MATCH_CONNMARK config NETFILTER_XT_MATCH_CONNTRACK tristate '"conntrack" connection tracking match support' depends on NETFILTER_XTABLES - depends on IP_NF_CONNTRACK || NF_CONNTRACK + depends on NF_CONNTRACK help This is a general conntrack match module, a superset of the state match. @@ -508,7 +471,7 @@ config NETFILTER_XT_MATCH_ESP config NETFILTER_XT_MATCH_HELPER tristate '"helper" match support' depends on NETFILTER_XTABLES - depends on IP_NF_CONNTRACK || NF_CONNTRACK + depends on NF_CONNTRACK help Helper matching allows you to match packets in dynamic connections tracked by a conntrack-helper, ie. 
ip_conntrack_ftp @@ -632,7 +595,7 @@ config NETFILTER_XT_MATCH_SCTP config NETFILTER_XT_MATCH_STATE tristate '"state" match support' depends on NETFILTER_XTABLES - depends on IP_NF_CONNTRACK || NF_CONNTRACK + depends on NF_CONNTRACK help Connection state matching allows you to match packets based on their relationship to a tracked connection (ie. previous packets). This diff -puN net/netfilter/core.c~git-net net/netfilter/core.c --- a/net/netfilter/core.c~git-net +++ a/net/netfilter/core.c @@ -5,10 +5,6 @@ * way. * * Rusty Russell (C)2000 -- This code is GPL. - * - * February 2000: Modified by James Morris to have 1 queue per protocol. - * 15-Mar-2000: Added NF_REPEAT --RR. - * 08-May-2003: Internal logging interface added by Jozsef Kadlecsik. */ #include #include @@ -244,6 +240,7 @@ void nf_proto_csum_replace4(__sum16 *sum } EXPORT_SYMBOL(nf_proto_csum_replace4); +#if defined(CONFIG_NF_CONNTRACK) || defined(CONFIG_NF_CONNTRACK_MODULE) /* This does not belong here, but locally generated errors need it if connection tracking in use: without this, connection may not be in hash table, and hence manufactured ICMP or RST packets will not be associated with it. */ @@ -264,6 +261,22 @@ void nf_ct_attach(struct sk_buff *new, s } EXPORT_SYMBOL(nf_ct_attach); +void (*nf_ct_destroy)(struct nf_conntrack *); +EXPORT_SYMBOL(nf_ct_destroy); + +void nf_conntrack_destroy(struct nf_conntrack *nfct) +{ + void (*destroy)(struct nf_conntrack *); + + rcu_read_lock(); + destroy = rcu_dereference(nf_ct_destroy); + BUG_ON(destroy == NULL); + destroy(nfct); + rcu_read_unlock(); +} +EXPORT_SYMBOL(nf_conntrack_destroy); +#endif /* CONFIG_NF_CONNTRACK */ + #ifdef CONFIG_PROC_FS struct proc_dir_entry *proc_net_netfilter; EXPORT_SYMBOL(proc_net_netfilter); diff -puN net/netfilter/nf_conntrack_core.c~git-net net/netfilter/nf_conntrack_core.c --- a/net/netfilter/nf_conntrack_core.c~git-net +++ a/net/netfilter/nf_conntrack_core.c @@ -9,24 +9,6 @@ * This program is free software; you can redistribute it and/or modify * it under the terms of the GNU General Public License version 2 as * published by the Free Software Foundation. - * - * 23 Apr 2001: Harald Welte - * - new API and handling of conntrack/nat helpers - * - now capable of multiple expectations for one master - * 16 Jul 2002: Harald Welte - * - add usage/reference counts to ip_conntrack_expect - * - export ip_conntrack[_expect]_{find_get,put} functions - * 16 Dec 2003: Yasuyuki Kozakai @USAGI - * - generalize L3 protocol denendent part. - * 23 Mar 2004: Yasuyuki Kozakai @USAGI - * - add support various size of conntrack structures. - * 26 Jan 2006: Harald Welte - * - restructure nf_conn (introduce nf_conn_help) - * - redesign 'features' how they were originally intended - * 26 Feb 2006: Pablo Neira Ayuso - * - add support for L3 protocol module load on demand. 
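The net/netfilter/core.c hunk above publishes the conntrack destructor through the nf_ct_destroy function pointer: the provider installs it with rcu_assign_pointer() and callers fetch it with rcu_dereference() under rcu_read_lock(), so the provider can clear the pointer and synchronize_rcu() before going away. A stripped-down sketch of the same hook pattern with hypothetical names:

#include <linux/rcupdate.h>

struct my_obj;                                  /* placeholder type */

/* The provider publishes its destructor here; NULL while it is absent. */
static void (*my_destroy_hook)(struct my_obj *);

void my_set_destroy_hook(void (*fn)(struct my_obj *))
{
        rcu_assign_pointer(my_destroy_hook, fn);        /* fn or NULL */
}

/* Callers go through the hook inside an RCU read-side section. */
void my_destroy(struct my_obj *obj)
{
        void (*destroy)(struct my_obj *);

        rcu_read_lock();
        destroy = rcu_dereference(my_destroy_hook);
        if (destroy)
                destroy(obj);
        rcu_read_unlock();
}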
- * - * Derived from net/ipv4/netfilter/ip_conntrack_core.c */ #include @@ -128,10 +110,11 @@ static u_int32_t __hash_conntrack(const unsigned int size, unsigned int rnd) { unsigned int a, b; - a = jhash((void *)tuple->src.u3.all, sizeof(tuple->src.u3.all), - ((tuple->src.l3num) << 16) | tuple->dst.protonum); - b = jhash((void *)tuple->dst.u3.all, sizeof(tuple->dst.u3.all), - (tuple->src.u.all << 16) | tuple->dst.u.all); + + a = jhash2(tuple->src.u3.all, ARRAY_SIZE(tuple->src.u3.all), + (tuple->src.l3num << 16) | tuple->dst.protonum); + b = jhash2(tuple->dst.u3.all, ARRAY_SIZE(tuple->dst.u3.all), + (tuple->src.u.all << 16) | tuple->dst.u.all); return jhash_2words(a, b, rnd) % size; } @@ -633,13 +616,11 @@ __nf_conntrack_alloc(const struct nf_con memset(conntrack, 0, nf_ct_cache[features].size); conntrack->features = features; atomic_set(&conntrack->ct_general.use, 1); - conntrack->ct_general.destroy = destroy_conntrack; conntrack->tuplehash[IP_CT_DIR_ORIGINAL].tuple = *orig; conntrack->tuplehash[IP_CT_DIR_REPLY].tuple = *repl; /* Don't set timer yet: wait for confirmation */ - init_timer(&conntrack->timeout); - conntrack->timeout.data = (unsigned long)conntrack; - conntrack->timeout.function = death_by_timeout; + setup_timer(&conntrack->timeout, death_by_timeout, + (unsigned long)conntrack); read_unlock_bh(&nf_ct_cache_lock); return conntrack; @@ -768,7 +749,7 @@ resolve_normal_ct(struct sk_buff *skb, struct nf_conntrack_tuple_hash *h; struct nf_conn *ct; - if (!nf_ct_get_tuple(skb, (unsigned int)(skb->nh.raw - skb->data), + if (!nf_ct_get_tuple(skb, skb_network_offset(skb), dataoff, l3num, protonum, &tuple, l3proto, l4proto)) { DEBUGP("resolve_normal_ct: Can't get tuple\n"); @@ -960,7 +941,7 @@ void __nf_ct_refresh_acct(struct nf_conn if (do_acct) { ct->counters[CTINFO2DIR(ctinfo)].packets++; ct->counters[CTINFO2DIR(ctinfo)].bytes += - skb->len - (unsigned int)(skb->nh.raw - skb->data); + skb->len - skb_network_offset(skb); if ((ct->counters[CTINFO2DIR(ctinfo)].packets & 0x80000000) || (ct->counters[CTINFO2DIR(ctinfo)].bytes & 0x80000000)) @@ -1140,6 +1121,8 @@ void nf_conntrack_cleanup(void) while (atomic_read(&nf_conntrack_untracked.ct_general.use) > 1) schedule(); + rcu_assign_pointer(nf_ct_destroy, NULL); + for (i = 0; i < NF_CT_F_NUM; i++) { if (nf_ct_cache[i].use == 0) continue; @@ -1152,14 +1135,7 @@ void nf_conntrack_cleanup(void) free_conntrack_hash(nf_conntrack_hash, nf_conntrack_vmalloc, nf_conntrack_htable_size); - nf_conntrack_l4proto_unregister(&nf_conntrack_l4proto_generic); - - /* free l3proto protocol tables */ - for (i = 0; i < PF_MAX; i++) - if (nf_ct_protos[i]) { - kfree(nf_ct_protos[i]); - nf_ct_protos[i] = NULL; - } + nf_conntrack_proto_fini(); } static struct list_head *alloc_hashtable(int size, int *vmalloced) @@ -1237,7 +1213,6 @@ module_param_call(hashsize, set_hashsize int __init nf_conntrack_init(void) { - unsigned int i; int ret; /* Idea from tcp.c: use 1/16384 of memory. On i386: 32MB @@ -1279,18 +1254,13 @@ int __init nf_conntrack_init(void) goto err_free_conntrack_slab; } - ret = nf_conntrack_l4proto_register(&nf_conntrack_l4proto_generic); + ret = nf_conntrack_proto_init(); if (ret < 0) goto out_free_expect_slab; - /* Don't NEED lock here, but good form anyway. 
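Several hunks in this series (the conntrack timeout above, the expectation timer, the nfnetlink_log flush timer) replace the three-statement init_timer() idiom with a single setup_timer() call. The equivalence, sketched on a hypothetical object:

#include <linux/timer.h>
#include <linux/jiffies.h>
#include <linux/kernel.h>

struct my_entry {
        struct timer_list timeout;
};

static void my_timeout(unsigned long data)
{
        struct my_entry *e = (struct my_entry *)data;

        printk(KERN_DEBUG "entry %p expired\n", e);
}

static void my_arm(struct my_entry *e)
{
        /* One call instead of init_timer() plus .function and .data: */
        setup_timer(&e->timeout, my_timeout, (unsigned long)e);

        /* Arming is unchanged: */
        e->timeout.expires = jiffies + 10 * HZ;
        add_timer(&e->timeout);
}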
*/ - write_lock_bh(&nf_conntrack_lock); - for (i = 0; i < AF_MAX; i++) - nf_ct_l3protos[i] = &nf_conntrack_l3proto_generic; - write_unlock_bh(&nf_conntrack_lock); - /* For use by REJECT target */ rcu_assign_pointer(ip_ct_attach, __nf_conntrack_attach); + rcu_assign_pointer(nf_ct_destroy, destroy_conntrack); /* Set up fake conntrack: - to never be deleted, not in any hashes */ diff -puN net/netfilter/nf_conntrack_ecache.c~git-net net/netfilter/nf_conntrack_ecache.c --- a/net/netfilter/nf_conntrack_ecache.c~git-net +++ a/net/netfilter/nf_conntrack_ecache.c @@ -91,3 +91,26 @@ void nf_ct_event_cache_flush(void) } } +int nf_conntrack_register_notifier(struct notifier_block *nb) +{ + return atomic_notifier_chain_register(&nf_conntrack_chain, nb); +} +EXPORT_SYMBOL_GPL(nf_conntrack_register_notifier); + +int nf_conntrack_unregister_notifier(struct notifier_block *nb) +{ + return atomic_notifier_chain_unregister(&nf_conntrack_chain, nb); +} +EXPORT_SYMBOL_GPL(nf_conntrack_unregister_notifier); + +int nf_conntrack_expect_register_notifier(struct notifier_block *nb) +{ + return atomic_notifier_chain_register(&nf_conntrack_expect_chain, nb); +} +EXPORT_SYMBOL_GPL(nf_conntrack_expect_register_notifier); + +int nf_conntrack_expect_unregister_notifier(struct notifier_block *nb) +{ + return atomic_notifier_chain_unregister(&nf_conntrack_expect_chain, nb); +} +EXPORT_SYMBOL_GPL(nf_conntrack_expect_unregister_notifier); diff -puN net/netfilter/nf_conntrack_expect.c~git-net net/netfilter/nf_conntrack_expect.c --- a/net/netfilter/nf_conntrack_expect.c~git-net +++ a/net/netfilter/nf_conntrack_expect.c @@ -290,9 +290,7 @@ static void nf_conntrack_expect_insert(s master_help->expecting++; list_add(&exp->list, &nf_conntrack_expect_list); - init_timer(&exp->timeout); - exp->timeout.data = (unsigned long)exp; - exp->timeout.function = expectation_timed_out; + setup_timer(&exp->timeout, expectation_timed_out, (unsigned long)exp); exp->timeout.expires = jiffies + master_help->helper->timeout * HZ; add_timer(&exp->timeout); diff -puN net/netfilter/nf_conntrack_ftp.c~git-net net/netfilter/nf_conntrack_ftp.c --- a/net/netfilter/nf_conntrack_ftp.c~git-net +++ a/net/netfilter/nf_conntrack_ftp.c @@ -7,12 +7,6 @@ * This program is free software; you can redistribute it and/or modify * it under the terms of the GNU General Public License version 2 as * published by the Free Software Foundation. - * - * 16 Dec 2003: Yasuyuki Kozakai @USAGI - * - enable working with Layer 3 protocol independent connection tracking. - * - track EPRT and EPSV commands with IPv6 address. - * - * Derived from net/ipv4/netfilter/ip_conntrack_ftp.c */ #include diff -puN net/netfilter/nf_conntrack_netbios_ns.c~git-net net/netfilter/nf_conntrack_netbios_ns.c --- a/net/netfilter/nf_conntrack_netbios_ns.c~git-net +++ a/net/netfilter/nf_conntrack_netbios_ns.c @@ -46,7 +46,7 @@ static int help(struct sk_buff **pskb, u struct nf_conn *ct, enum ip_conntrack_info ctinfo) { struct nf_conntrack_expect *exp; - struct iphdr *iph = (*pskb)->nh.iph; + struct iphdr *iph = ip_hdr(*pskb); struct rtable *rt = (struct rtable *)(*pskb)->dst; struct in_device *in_dev; __be32 mask = 0; diff -puN net/netfilter/nf_conntrack_netlink.c~git-net net/netfilter/nf_conntrack_netlink.c --- a/net/netfilter/nf_conntrack_netlink.c~git-net +++ a/net/netfilter/nf_conntrack_netlink.c @@ -6,9 +6,6 @@ * (C) 2003 by Patrick Mchardy * (C) 2005-2006 by Pablo Neira Ayuso * - * I've reworked this stuff to use attributes instead of conntrack - * structures. 5.44 am. I need more tea. 
--pablo 05/07/11. - * * Initial connection tracking via netlink development funded and * generally made possible by Network Robots, Inc. (www.networkrobots.com) * @@ -16,8 +13,6 @@ * * This software may be used and distributed according to the terms * of the GNU General Public License, incorporated herein by reference. - * - * Derived from ip_conntrack_netlink.c: Port by Pablo Neira Ayuso (05/11/14) */ #include @@ -33,6 +28,7 @@ #include #include +#include #include #include #include @@ -268,9 +264,7 @@ ctnetlink_fill_info(struct sk_buff *skb, struct nlmsghdr *nlh; struct nfgenmsg *nfmsg; struct nfattr *nest_parms; - unsigned char *b; - - b = skb->tail; + unsigned char *b = skb_tail_pointer(skb); event |= NFNL_SUBSYS_CTNETLINK << 8; nlh = NLMSG_PUT(skb, pid, seq, event, sizeof(struct nfgenmsg)); @@ -303,12 +297,12 @@ ctnetlink_fill_info(struct sk_buff *skb, ctnetlink_dump_use(skb, ct) < 0) goto nfattr_failure; - nlh->nlmsg_len = skb->tail - b; + nlh->nlmsg_len = skb_tail_pointer(skb) - b; return skb->len; nlmsg_failure: nfattr_failure: - skb_trim(skb, b - skb->data); + nlmsg_trim(skb, b); return -1; } @@ -322,7 +316,7 @@ static int ctnetlink_conntrack_event(str struct nf_conn *ct = (struct nf_conn *)ptr; struct sk_buff *skb; unsigned int type; - unsigned char *b; + sk_buff_data_t b; unsigned int flags = 0, group; /* ignore our fake conntrack entry */ @@ -662,7 +656,7 @@ static const size_t cta_min[CTA_MAX] = { static int ctnetlink_del_conntrack(struct sock *ctnl, struct sk_buff *skb, - struct nlmsghdr *nlh, struct nfattr *cda[], int *errp) + struct nlmsghdr *nlh, struct nfattr *cda[]) { struct nf_conntrack_tuple_hash *h; struct nf_conntrack_tuple tuple; @@ -710,7 +704,7 @@ ctnetlink_del_conntrack(struct sock *ctn static int ctnetlink_get_conntrack(struct sock *ctnl, struct sk_buff *skb, - struct nlmsghdr *nlh, struct nfattr *cda[], int *errp) + struct nlmsghdr *nlh, struct nfattr *cda[]) { struct nf_conntrack_tuple_hash *h; struct nf_conntrack_tuple tuple; @@ -721,22 +715,12 @@ ctnetlink_get_conntrack(struct sock *ctn int err = 0; if (nlh->nlmsg_flags & NLM_F_DUMP) { - u32 rlen; - #ifndef CONFIG_NF_CT_ACCT if (NFNL_MSG_TYPE(nlh->nlmsg_type) == IPCTNL_MSG_CT_GET_CTRZERO) return -ENOTSUPP; #endif - if ((*errp = netlink_dump_start(ctnl, skb, nlh, - ctnetlink_dump_table, - ctnetlink_done)) != 0) - return -EINVAL; - - rlen = NLMSG_ALIGN(nlh->nlmsg_len); - if (rlen > skb->len) - rlen = skb->len; - skb_pull(skb, rlen); - return 0; + return netlink_dump_start(ctnl, skb, nlh, ctnetlink_dump_table, + ctnetlink_done); } if (nfattr_bad_size(cda, CTA_MAX, cta_min)) @@ -1010,7 +994,7 @@ err: static int ctnetlink_new_conntrack(struct sock *ctnl, struct sk_buff *skb, - struct nlmsghdr *nlh, struct nfattr *cda[], int *errp) + struct nlmsghdr *nlh, struct nfattr *cda[]) { struct nf_conntrack_tuple otuple, rtuple; struct nf_conntrack_tuple_hash *h = NULL; @@ -1152,9 +1136,7 @@ ctnetlink_exp_fill_info(struct sk_buff * { struct nlmsghdr *nlh; struct nfgenmsg *nfmsg; - unsigned char *b; - - b = skb->tail; + unsigned char *b = skb_tail_pointer(skb); event |= NFNL_SUBSYS_CTNETLINK_EXP << 8; nlh = NLMSG_PUT(skb, pid, seq, event, sizeof(struct nfgenmsg)); @@ -1168,12 +1150,12 @@ ctnetlink_exp_fill_info(struct sk_buff * if (ctnetlink_exp_dump_expect(skb, exp) < 0) goto nfattr_failure; - nlh->nlmsg_len = skb->tail - b; + nlh->nlmsg_len = skb_tail_pointer(skb) - b; return skb->len; nlmsg_failure: nfattr_failure: - skb_trim(skb, b - skb->data); + nlmsg_trim(skb, b); return -1; } @@ -1186,7 +1168,7 @@ static int 
ctnetlink_expect_event(struct struct nf_conntrack_expect *exp = (struct nf_conntrack_expect *)ptr; struct sk_buff *skb; unsigned int type; - unsigned char *b; + sk_buff_data_t b; int flags = 0; if (events & IPEXP_NEW) { @@ -1263,7 +1245,7 @@ static const size_t cta_min_exp[CTA_EXPE static int ctnetlink_get_expect(struct sock *ctnl, struct sk_buff *skb, - struct nlmsghdr *nlh, struct nfattr *cda[], int *errp) + struct nlmsghdr *nlh, struct nfattr *cda[]) { struct nf_conntrack_tuple tuple; struct nf_conntrack_expect *exp; @@ -1276,17 +1258,9 @@ ctnetlink_get_expect(struct sock *ctnl, return -EINVAL; if (nlh->nlmsg_flags & NLM_F_DUMP) { - u32 rlen; - - if ((*errp = netlink_dump_start(ctnl, skb, nlh, - ctnetlink_exp_dump_table, - ctnetlink_done)) != 0) - return -EINVAL; - rlen = NLMSG_ALIGN(nlh->nlmsg_len); - if (rlen > skb->len) - rlen = skb->len; - skb_pull(skb, rlen); - return 0; + return netlink_dump_start(ctnl, skb, nlh, + ctnetlink_exp_dump_table, + ctnetlink_done); } if (cda[CTA_EXPECT_MASTER-1]) @@ -1333,7 +1307,7 @@ out: static int ctnetlink_del_expect(struct sock *ctnl, struct sk_buff *skb, - struct nlmsghdr *nlh, struct nfattr *cda[], int *errp) + struct nlmsghdr *nlh, struct nfattr *cda[]) { struct nf_conntrack_expect *exp, *tmp; struct nf_conntrack_tuple tuple; @@ -1467,7 +1441,7 @@ out: static int ctnetlink_new_expect(struct sock *ctnl, struct sk_buff *skb, - struct nlmsghdr *nlh, struct nfattr *cda[], int *errp) + struct nlmsghdr *nlh, struct nfattr *cda[]) { struct nf_conntrack_tuple tuple; struct nf_conntrack_expect *exp; diff -puN net/netfilter/nf_conntrack_proto.c~git-net net/netfilter/nf_conntrack_proto.c --- a/net/netfilter/nf_conntrack_proto.c~git-net +++ a/net/netfilter/nf_conntrack_proto.c @@ -32,9 +32,9 @@ struct nf_conntrack_l4proto **nf_ct_prot struct nf_conntrack_l3proto *nf_ct_l3protos[AF_MAX] __read_mostly; EXPORT_SYMBOL_GPL(nf_ct_l3protos); -#ifdef CONFIG_SYSCTL -static DEFINE_MUTEX(nf_ct_proto_sysctl_mutex); +static DEFINE_MUTEX(nf_ct_proto_mutex); +#ifdef CONFIG_SYSCTL static int nf_ct_register_sysctl(struct ctl_table_header **header, struct ctl_table *path, struct ctl_table *table, unsigned int *users) @@ -164,13 +164,11 @@ static int nf_ct_l3proto_register_sysctl int err = 0; #ifdef CONFIG_SYSCTL - mutex_lock(&nf_ct_proto_sysctl_mutex); if (l3proto->ctl_table != NULL) { err = nf_ct_register_sysctl(&l3proto->ctl_table_header, l3proto->ctl_table_path, l3proto->ctl_table, NULL); } - mutex_unlock(&nf_ct_proto_sysctl_mutex); #endif return err; } @@ -178,11 +176,9 @@ static int nf_ct_l3proto_register_sysctl static void nf_ct_l3proto_unregister_sysctl(struct nf_conntrack_l3proto *l3proto) { #ifdef CONFIG_SYSCTL - mutex_lock(&nf_ct_proto_sysctl_mutex); if (l3proto->ctl_table_header != NULL) nf_ct_unregister_sysctl(&l3proto->ctl_table_header, l3proto->ctl_table, NULL); - mutex_unlock(&nf_ct_proto_sysctl_mutex); #endif } @@ -190,27 +186,23 @@ int nf_conntrack_l3proto_register(struct { int ret = 0; - if (proto->l3proto >= AF_MAX) { - ret = -EBUSY; - goto out; - } + if (proto->l3proto >= AF_MAX) + return -EBUSY; - write_lock_bh(&nf_conntrack_lock); + mutex_lock(&nf_ct_proto_mutex); if (nf_ct_l3protos[proto->l3proto] != &nf_conntrack_l3proto_generic) { ret = -EBUSY; goto out_unlock; } - rcu_assign_pointer(nf_ct_l3protos[proto->l3proto], proto); - write_unlock_bh(&nf_conntrack_lock); ret = nf_ct_l3proto_register_sysctl(proto); if (ret < 0) - nf_conntrack_l3proto_unregister(proto); - return ret; + goto out_unlock; + + rcu_assign_pointer(nf_ct_l3protos[proto->l3proto], 
proto); out_unlock: - write_unlock_bh(&nf_conntrack_lock); -out: + mutex_unlock(&nf_ct_proto_mutex); return ret; } EXPORT_SYMBOL_GPL(nf_conntrack_l3proto_register); @@ -219,14 +211,14 @@ void nf_conntrack_l3proto_unregister(str { BUG_ON(proto->l3proto >= AF_MAX); - write_lock_bh(&nf_conntrack_lock); + mutex_lock(&nf_ct_proto_mutex); BUG_ON(nf_ct_l3protos[proto->l3proto] != proto); rcu_assign_pointer(nf_ct_l3protos[proto->l3proto], &nf_conntrack_l3proto_generic); - write_unlock_bh(&nf_conntrack_lock); - synchronize_rcu(); - nf_ct_l3proto_unregister_sysctl(proto); + mutex_unlock(&nf_ct_proto_mutex); + + synchronize_rcu(); /* Remove all contrack entries for this protocol */ nf_ct_iterate_cleanup(kill_l3proto, proto); @@ -238,7 +230,6 @@ static int nf_ct_l4proto_register_sysctl int err = 0; #ifdef CONFIG_SYSCTL - mutex_lock(&nf_ct_proto_sysctl_mutex); if (l4proto->ctl_table != NULL) { err = nf_ct_register_sysctl(l4proto->ctl_table_header, nf_net_netfilter_sysctl_path, @@ -260,7 +251,6 @@ static int nf_ct_l4proto_register_sysctl } #endif /* CONFIG_NF_CONNTRACK_PROC_COMPAT */ out: - mutex_unlock(&nf_ct_proto_sysctl_mutex); #endif /* CONFIG_SYSCTL */ return err; } @@ -268,7 +258,6 @@ out: static void nf_ct_l4proto_unregister_sysctl(struct nf_conntrack_l4proto *l4proto) { #ifdef CONFIG_SYSCTL - mutex_lock(&nf_ct_proto_sysctl_mutex); if (l4proto->ctl_table_header != NULL && *l4proto->ctl_table_header != NULL) nf_ct_unregister_sysctl(l4proto->ctl_table_header, @@ -279,7 +268,6 @@ static void nf_ct_l4proto_unregister_sys nf_ct_unregister_sysctl(&l4proto->ctl_compat_table_header, l4proto->ctl_compat_table, NULL); #endif /* CONFIG_NF_CONNTRACK_PROC_COMPAT */ - mutex_unlock(&nf_ct_proto_sysctl_mutex); #endif /* CONFIG_SYSCTL */ } @@ -289,68 +277,41 @@ int nf_conntrack_l4proto_register(struct { int ret = 0; - if (l4proto->l3proto >= PF_MAX) { - ret = -EBUSY; - goto out; - } - - if (l4proto == &nf_conntrack_l4proto_generic) - return nf_ct_l4proto_register_sysctl(l4proto); + if (l4proto->l3proto >= PF_MAX) + return -EBUSY; -retry: - write_lock_bh(&nf_conntrack_lock); - if (nf_ct_protos[l4proto->l3proto]) { - if (nf_ct_protos[l4proto->l3proto][l4proto->l4proto] - != &nf_conntrack_l4proto_generic) { - ret = -EBUSY; - goto out_unlock; - } - } else { + mutex_lock(&nf_ct_proto_mutex); + if (!nf_ct_protos[l4proto->l3proto]) { /* l3proto may be loaded latter. 
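Earlier in this patch, ctnetlink_fill_info() and ctnetlink_exp_fill_info() switch from raw skb->tail arithmetic to skb_tail_pointer() and rewind partially built messages with nlmsg_trim(). A sketch of that message-building shape; my_fill() and my_dump_attrs() are placeholders and the nfgenmsg payload is left unfilled:

#include <linux/skbuff.h>
#include <linux/netlink.h>
#include <linux/netfilter/nfnetlink.h>
#include <net/netlink.h>

static int my_dump_attrs(struct sk_buff *skb)
{
        return 0;       /* stand-in for the NFA_PUT() calls */
}

static int my_fill(struct sk_buff *skb, u32 pid, u32 seq, int type)
{
        struct nlmsghdr *nlh;
        unsigned char *b = skb_tail_pointer(skb);

        nlh = NLMSG_PUT(skb, pid, seq, type, sizeof(struct nfgenmsg));

        if (my_dump_attrs(skb) < 0)
                goto nfattr_failure;

        nlh->nlmsg_len = skb_tail_pointer(skb) - b;
        return skb->len;

nlmsg_failure:
nfattr_failure:
        nlmsg_trim(skb, b);     /* drop whatever was added after b */
        return -1;
}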
*/ struct nf_conntrack_l4proto **proto_array; int i; - write_unlock_bh(&nf_conntrack_lock); - - proto_array = (struct nf_conntrack_l4proto **) - kmalloc(MAX_NF_CT_PROTO * - sizeof(struct nf_conntrack_l4proto *), - GFP_KERNEL); + proto_array = kmalloc(MAX_NF_CT_PROTO * + sizeof(struct nf_conntrack_l4proto *), + GFP_KERNEL); if (proto_array == NULL) { ret = -ENOMEM; - goto out; + goto out_unlock; } + for (i = 0; i < MAX_NF_CT_PROTO; i++) proto_array[i] = &nf_conntrack_l4proto_generic; - - write_lock_bh(&nf_conntrack_lock); - if (nf_ct_protos[l4proto->l3proto]) { - /* bad timing, but no problem */ - write_unlock_bh(&nf_conntrack_lock); - kfree(proto_array); - } else { - nf_ct_protos[l4proto->l3proto] = proto_array; - write_unlock_bh(&nf_conntrack_lock); - } - - /* - * Just once because array is never freed until unloading - * nf_conntrack.ko - */ - goto retry; + nf_ct_protos[l4proto->l3proto] = proto_array; + } else if (nf_ct_protos[l4proto->l3proto][l4proto->l4proto] != + &nf_conntrack_l4proto_generic) { + ret = -EBUSY; + goto out_unlock; } - rcu_assign_pointer(nf_ct_protos[l4proto->l3proto][l4proto->l4proto], l4proto); - write_unlock_bh(&nf_conntrack_lock); - ret = nf_ct_l4proto_register_sysctl(l4proto); if (ret < 0) - nf_conntrack_l4proto_unregister(l4proto); - return ret; + goto out_unlock; + + rcu_assign_pointer(nf_ct_protos[l4proto->l3proto][l4proto->l4proto], + l4proto); out_unlock: - write_unlock_bh(&nf_conntrack_lock); -out: + mutex_unlock(&nf_ct_proto_mutex); return ret; } EXPORT_SYMBOL_GPL(nf_conntrack_l4proto_register); @@ -359,21 +320,42 @@ void nf_conntrack_l4proto_unregister(str { BUG_ON(l4proto->l3proto >= PF_MAX); - if (l4proto == &nf_conntrack_l4proto_generic) { - nf_ct_l4proto_unregister_sysctl(l4proto); - return; - } - - write_lock_bh(&nf_conntrack_lock); + mutex_lock(&nf_ct_proto_mutex); BUG_ON(nf_ct_protos[l4proto->l3proto][l4proto->l4proto] != l4proto); rcu_assign_pointer(nf_ct_protos[l4proto->l3proto][l4proto->l4proto], &nf_conntrack_l4proto_generic); - write_unlock_bh(&nf_conntrack_lock); - synchronize_rcu(); - nf_ct_l4proto_unregister_sysctl(l4proto); + mutex_unlock(&nf_ct_proto_mutex); + + synchronize_rcu(); /* Remove all contrack entries for this protocol */ nf_ct_iterate_cleanup(kill_l4proto, l4proto); } EXPORT_SYMBOL_GPL(nf_conntrack_l4proto_unregister); + +int nf_conntrack_proto_init(void) +{ + unsigned int i; + int err; + + err = nf_ct_l4proto_register_sysctl(&nf_conntrack_l4proto_generic); + if (err < 0) + return err; + + for (i = 0; i < AF_MAX; i++) + rcu_assign_pointer(nf_ct_l3protos[i], + &nf_conntrack_l3proto_generic); + return 0; +} + +void nf_conntrack_proto_fini(void) +{ + unsigned int i; + + nf_ct_l4proto_unregister_sysctl(&nf_conntrack_l4proto_generic); + + /* free l3proto protocol tables */ + for (i = 0; i < PF_MAX; i++) + kfree(nf_ct_protos[i]); +} diff -puN net/netfilter/nf_conntrack_proto_generic.c~git-net net/netfilter/nf_conntrack_proto_generic.c --- a/net/netfilter/nf_conntrack_proto_generic.c~git-net +++ a/net/netfilter/nf_conntrack_proto_generic.c @@ -4,11 +4,6 @@ * This program is free software; you can redistribute it and/or modify * it under the terms of the GNU General Public License version 2 as * published by the Free Software Foundation. - * - * 16 Dec 2003: Yasuyuki Kozakai @USAGI - * - enable working with L3 protocol independent connection tracking. 
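The l3proto/l4proto registration rework above replaces the write_lock_bh() retry dance with a single registration mutex: check the slot, set up sysctls, and only then publish the pointer with rcu_assign_pointer(). The shape of that path, with placeholder table, type and helper names; it assumes an init step has already pointed every slot at the generic entry, as nf_conntrack_proto_init() does:

#include <linux/mutex.h>
#include <linux/rcupdate.h>
#include <linux/errno.h>

#define MY_PROTO_MAX 256

struct my_proto {
        unsigned int id;
        /* ... */
};

static struct my_proto my_generic_proto;
static struct my_proto *my_protos[MY_PROTO_MAX];
static DEFINE_MUTEX(my_proto_mutex);

static int my_register_sysctl(struct my_proto *p)
{
        return 0;       /* stand-in for the sysctl registration */
}

int my_proto_register(struct my_proto *p)
{
        int ret = 0;

        if (p->id >= MY_PROTO_MAX)
                return -EBUSY;

        mutex_lock(&my_proto_mutex);
        if (my_protos[p->id] != &my_generic_proto) {
                ret = -EBUSY;                   /* slot already taken */
                goto out_unlock;
        }

        ret = my_register_sysctl(p);
        if (ret < 0)
                goto out_unlock;

        /* Publish only after everything else has succeeded. */
        rcu_assign_pointer(my_protos[p->id], p);
out_unlock:
        mutex_unlock(&my_proto_mutex);
        return ret;
}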
- * - * Derived from net/ipv4/netfilter/ip_conntrack_proto_generic.c */ #include diff -puN net/netfilter/nf_conntrack_proto_sctp.c~git-net net/netfilter/nf_conntrack_proto_sctp.c --- a/net/netfilter/nf_conntrack_proto_sctp.c~git-net +++ a/net/netfilter/nf_conntrack_proto_sctp.c @@ -7,15 +7,6 @@ * This program is free software; you can redistribute it and/or modify * it under the terms of the GNU General Public License version 2 as * published by the Free Software Foundation. - * - * 17 Oct 2004: Yasuyuki Kozakai @USAGI - * - enable working with L3 protocol independent connection tracking. - * - * Derived from net/ipv4/ip_conntrack_sctp.c - */ - -/* - * Added support for proc manipulation of timeouts. */ #include diff -puN net/netfilter/nf_conntrack_proto_tcp.c~git-net net/netfilter/nf_conntrack_proto_tcp.c --- a/net/netfilter/nf_conntrack_proto_tcp.c~git-net +++ a/net/netfilter/nf_conntrack_proto_tcp.c @@ -4,24 +4,6 @@ * This program is free software; you can redistribute it and/or modify * it under the terms of the GNU General Public License version 2 as * published by the Free Software Foundation. - * - * Jozsef Kadlecsik : - * - Real stateful connection tracking - * - Modified state transitions table - * - Window scaling support added - * - SACK support added - * - * Willy Tarreau: - * - State table bugfixes - * - More robust state changes - * - Tuning timer parameters - * - * 27 Oct 2004: Yasuyuki Kozakai @USAGI - * - genelized Layer 3 protocol part. - * - * Derived from net/ipv4/netfilter/ip_conntrack_proto_tcp.c - * - * version 2.2 */ #include @@ -470,11 +452,10 @@ static void tcp_sack(const struct sk_buf /* Fast path for timestamp-only option */ if (length == TCPOLEN_TSTAMP_ALIGNED*4 - && *(__be32 *)ptr == - __constant_htonl((TCPOPT_NOP << 24) - | (TCPOPT_NOP << 16) - | (TCPOPT_TIMESTAMP << 8) - | TCPOLEN_TIMESTAMP)) + && *(__be32 *)ptr == htonl((TCPOPT_NOP << 24) + | (TCPOPT_NOP << 16) + | (TCPOPT_TIMESTAMP << 8) + | TCPOLEN_TIMESTAMP)) return; while (length > 0) { @@ -765,26 +746,18 @@ EXPORT_SYMBOL_GPL(nf_conntrack_tcp_updat #define TH_ECE 0x40 #define TH_CWR 0x80 -/* table of valid flag combinations - ECE and CWR are always valid */ -static u8 tcp_valid_flags[(TH_FIN|TH_SYN|TH_RST|TH_PUSH|TH_ACK|TH_URG) + 1] = +/* table of valid flag combinations - PUSH, ECE and CWR are always valid */ +static u8 tcp_valid_flags[(TH_FIN|TH_SYN|TH_RST|TH_ACK|TH_URG) + 1] = { [TH_SYN] = 1, - [TH_SYN|TH_PUSH] = 1, [TH_SYN|TH_URG] = 1, - [TH_SYN|TH_PUSH|TH_URG] = 1, [TH_SYN|TH_ACK] = 1, - [TH_SYN|TH_ACK|TH_PUSH] = 1, [TH_RST] = 1, [TH_RST|TH_ACK] = 1, - [TH_RST|TH_ACK|TH_PUSH] = 1, [TH_FIN|TH_ACK] = 1, + [TH_FIN|TH_ACK|TH_URG] = 1, [TH_ACK] = 1, - [TH_ACK|TH_PUSH] = 1, [TH_ACK|TH_URG] = 1, - [TH_ACK|TH_URG|TH_PUSH] = 1, - [TH_FIN|TH_ACK|TH_PUSH] = 1, - [TH_FIN|TH_ACK|TH_URG] = 1, - [TH_FIN|TH_ACK|TH_URG|TH_PUSH] = 1, }; /* Protect conntrack agaist broken packets. Code taken from ipt_unclean.c. */ @@ -831,7 +804,7 @@ static int tcp_error(struct sk_buff *skb } /* Check TCP flags. 
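The timestamp-option fast path above drops __constant_htonl() in favour of plain htonl(), which the byteorder headers fold to a constant whenever the argument is constant. A tiny sketch of that check; TCPOPT_NOP, TCPOPT_TIMESTAMP and TCPOLEN_TIMESTAMP come from net/tcp.h, and my_is_tstamp_only() is a placeholder name:

#include <linux/types.h>
#include <net/tcp.h>

/* Fast path: a timestamp-only option block is one fixed 32-bit word. */
static int my_is_tstamp_only(const __be32 *opt)
{
        return *opt == htonl((TCPOPT_NOP << 24) | (TCPOPT_NOP << 16) |
                             (TCPOPT_TIMESTAMP << 8) | TCPOLEN_TIMESTAMP);
}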
*/ - tcpflags = (((u_int8_t *)th)[13] & ~(TH_ECE|TH_CWR)); + tcpflags = (((u_int8_t *)th)[13] & ~(TH_ECE|TH_CWR|TH_PUSH)); if (!tcp_valid_flags[tcpflags]) { if (LOG_INVALID(IPPROTO_TCP)) nf_log_packet(pf, 0, skb, NULL, NULL, NULL, @@ -1110,11 +1083,26 @@ static int tcp_to_nfattr(struct sk_buff const struct nf_conn *ct) { struct nfattr *nest_parms; + struct nf_ct_tcp_flags tmp = {}; read_lock_bh(&tcp_lock); nest_parms = NFA_NEST(skb, CTA_PROTOINFO_TCP); NFA_PUT(skb, CTA_PROTOINFO_TCP_STATE, sizeof(u_int8_t), &ct->proto.tcp.state); + + NFA_PUT(skb, CTA_PROTOINFO_TCP_WSCALE_ORIGINAL, sizeof(u_int8_t), + &ct->proto.tcp.seen[0].td_scale); + + NFA_PUT(skb, CTA_PROTOINFO_TCP_WSCALE_REPLY, sizeof(u_int8_t), + &ct->proto.tcp.seen[1].td_scale); + + tmp.flags = ct->proto.tcp.seen[0].flags; + NFA_PUT(skb, CTA_PROTOINFO_TCP_FLAGS_ORIGINAL, + sizeof(struct nf_ct_tcp_flags), &tmp); + + tmp.flags = ct->proto.tcp.seen[1].flags; + NFA_PUT(skb, CTA_PROTOINFO_TCP_FLAGS_REPLY, + sizeof(struct nf_ct_tcp_flags), &tmp); read_unlock_bh(&tcp_lock); NFA_NEST_END(skb, nest_parms); @@ -1127,7 +1115,11 @@ nfattr_failure: } static const size_t cta_min_tcp[CTA_PROTOINFO_TCP_MAX] = { - [CTA_PROTOINFO_TCP_STATE-1] = sizeof(u_int8_t), + [CTA_PROTOINFO_TCP_STATE-1] = sizeof(u_int8_t), + [CTA_PROTOINFO_TCP_WSCALE_ORIGINAL-1] = sizeof(u_int8_t), + [CTA_PROTOINFO_TCP_WSCALE_REPLY-1] = sizeof(u_int8_t), + [CTA_PROTOINFO_TCP_FLAGS_ORIGINAL-1] = sizeof(struct nf_ct_tcp_flags), + [CTA_PROTOINFO_TCP_FLAGS_REPLY-1] = sizeof(struct nf_ct_tcp_flags) }; static int nfattr_to_tcp(struct nfattr *cda[], struct nf_conn *ct) @@ -1151,6 +1143,30 @@ static int nfattr_to_tcp(struct nfattr * write_lock_bh(&tcp_lock); ct->proto.tcp.state = *(u_int8_t *)NFA_DATA(tb[CTA_PROTOINFO_TCP_STATE-1]); + + if (tb[CTA_PROTOINFO_TCP_FLAGS_ORIGINAL-1]) { + struct nf_ct_tcp_flags *attr = + NFA_DATA(tb[CTA_PROTOINFO_TCP_FLAGS_ORIGINAL-1]); + ct->proto.tcp.seen[0].flags &= ~attr->mask; + ct->proto.tcp.seen[0].flags |= attr->flags & attr->mask; + } + + if (tb[CTA_PROTOINFO_TCP_FLAGS_REPLY-1]) { + struct nf_ct_tcp_flags *attr = + NFA_DATA(tb[CTA_PROTOINFO_TCP_FLAGS_REPLY-1]); + ct->proto.tcp.seen[1].flags &= ~attr->mask; + ct->proto.tcp.seen[1].flags |= attr->flags & attr->mask; + } + + if (tb[CTA_PROTOINFO_TCP_WSCALE_ORIGINAL-1] && + tb[CTA_PROTOINFO_TCP_WSCALE_REPLY-1] && + ct->proto.tcp.seen[0].flags & IP_CT_TCP_FLAG_WINDOW_SCALE && + ct->proto.tcp.seen[1].flags & IP_CT_TCP_FLAG_WINDOW_SCALE) { + ct->proto.tcp.seen[0].td_scale = *(u_int8_t *) + NFA_DATA(tb[CTA_PROTOINFO_TCP_WSCALE_ORIGINAL-1]); + ct->proto.tcp.seen[1].td_scale = *(u_int8_t *) + NFA_DATA(tb[CTA_PROTOINFO_TCP_WSCALE_REPLY-1]); + } write_unlock_bh(&tcp_lock); return 0; diff -puN net/netfilter/nf_conntrack_proto_udp.c~git-net net/netfilter/nf_conntrack_proto_udp.c --- a/net/netfilter/nf_conntrack_proto_udp.c~git-net +++ a/net/netfilter/nf_conntrack_proto_udp.c @@ -4,11 +4,6 @@ * This program is free software; you can redistribute it and/or modify * it under the terms of the GNU General Public License version 2 as * published by the Free Software Foundation. - * - * 16 Dec 2003: Yasuyuki Kozakai @USAGI - * - enable working with Layer 3 protocol independent connection tracking. 
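The tcp_valid_flags change above moves PUSH into the always-acceptable mask next to ECE and CWR, so the table only has to enumerate combinations of FIN, SYN, RST, ACK and URG. A sketch of the resulting check; the TH_* values and the table mirror the ones in nf_conntrack_proto_tcp.c, and my_tcp_flags_ok() is a placeholder:

#include <linux/types.h>
#include <linux/tcp.h>

#define TH_FIN  0x01
#define TH_SYN  0x02
#define TH_RST  0x04
#define TH_PUSH 0x08
#define TH_ACK  0x10
#define TH_URG  0x20
#define TH_ECE  0x40
#define TH_CWR  0x80

/* Indexed by the five remaining flag bits; 1 == acceptable combination. */
static const u8 my_valid_flags[(TH_FIN|TH_SYN|TH_RST|TH_ACK|TH_URG) + 1] = {
        [TH_SYN]                = 1,
        [TH_SYN|TH_URG]         = 1,
        [TH_SYN|TH_ACK]         = 1,
        [TH_RST]                = 1,
        [TH_RST|TH_ACK]         = 1,
        [TH_FIN|TH_ACK]         = 1,
        [TH_FIN|TH_ACK|TH_URG]  = 1,
        [TH_ACK]                = 1,
        [TH_ACK|TH_URG]         = 1,
};

static int my_tcp_flags_ok(const struct tcphdr *th)
{
        u8 flags = ((const u8 *)th)[13] & ~(TH_ECE | TH_CWR | TH_PUSH);

        return my_valid_flags[flags];
}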
- * - * Derived from net/ipv4/netfilter/ip_conntrack_proto_udp.c */ #include diff -puN net/netfilter/nf_conntrack_standalone.c~git-net net/netfilter/nf_conntrack_standalone.c --- a/net/netfilter/nf_conntrack_standalone.c~git-net +++ a/net/netfilter/nf_conntrack_standalone.c @@ -1,20 +1,9 @@ -/* This file contains all the functions required for the standalone - nf_conntrack module. - - These are not required by the compatibility layer. -*/ - /* (C) 1999-2001 Paul `Rusty' Russell * (C) 2002-2004 Netfilter Core Team * * This program is free software; you can redistribute it and/or modify * it under the terms of the GNU General Public License version 2 as * published by the Free Software Foundation. - * - * 16 Dec 2003: Yasuyuki Kozakai @USAGI - * - generalize L3 protocol dependent part. - * - * Derived from net/ipv4/netfilter/ip_conntrack_standalone.c */ #include diff -puN net/netfilter/nfnetlink.c~git-net net/netfilter/nfnetlink.c --- a/net/netfilter/nfnetlink.c~git-net +++ a/net/netfilter/nfnetlink.c @@ -3,7 +3,7 @@ * * (C) 2001 by Jay Schulist , * (C) 2002-2005 by Harald Welte - * (C) 2005 by Pablo Neira Ayuso + * (C) 2005,2007 by Pablo Neira Ayuso * * Initial netfilter messages via netlink development funded and * generally made possible by Network Robots, Inc. (www.networkrobots.com) @@ -28,10 +28,9 @@ #include #include #include +#include #include -#include -#include #include #include @@ -41,32 +40,34 @@ MODULE_ALIAS_NET_PF_PROTO(PF_NETLINK, NE static char __initdata nfversion[] = "0.30"; -#if 0 -#define DEBUGP(format, args...) \ - printk(KERN_DEBUG "%s(%d):%s(): " format, __FILE__, \ - __LINE__, __FUNCTION__, ## args) -#else -#define DEBUGP(format, args...) -#endif - static struct sock *nfnl = NULL; static struct nfnetlink_subsystem *subsys_table[NFNL_SUBSYS_COUNT]; -DECLARE_MUTEX(nfnl_sem); +static DEFINE_MUTEX(nfnl_mutex); -void nfnl_lock(void) +static void nfnl_lock(void) { - nfnl_shlock(); + mutex_lock(&nfnl_mutex); } -void nfnl_unlock(void) +static int nfnl_trylock(void) { - nfnl_shunlock(); + return !mutex_trylock(&nfnl_mutex); } -int nfnetlink_subsys_register(struct nfnetlink_subsystem *n) +static void __nfnl_unlock(void) { - DEBUGP("registering subsystem ID %u\n", n->subsys_id); + mutex_unlock(&nfnl_mutex); +} + +static void nfnl_unlock(void) +{ + mutex_unlock(&nfnl_mutex); + if (nfnl->sk_receive_queue.qlen) + nfnl->sk_data_ready(nfnl, 0); +} +int nfnetlink_subsys_register(struct nfnetlink_subsystem *n) +{ nfnl_lock(); if (subsys_table[n->subsys_id]) { nfnl_unlock(); @@ -77,24 +78,23 @@ int nfnetlink_subsys_register(struct nfn return 0; } +EXPORT_SYMBOL_GPL(nfnetlink_subsys_register); int nfnetlink_subsys_unregister(struct nfnetlink_subsystem *n) { - DEBUGP("unregistering subsystem ID %u\n", n->subsys_id); - nfnl_lock(); subsys_table[n->subsys_id] = NULL; nfnl_unlock(); return 0; } +EXPORT_SYMBOL_GPL(nfnetlink_subsys_unregister); static inline struct nfnetlink_subsystem *nfnetlink_get_subsys(u_int16_t type) { u_int8_t subsys_id = NFNL_SUBSYS_ID(type); - if (subsys_id >= NFNL_SUBSYS_COUNT - || subsys_table[subsys_id] == NULL) + if (subsys_id >= NFNL_SUBSYS_COUNT) return NULL; return subsys_table[subsys_id]; @@ -105,10 +105,8 @@ nfnetlink_find_client(u_int16_t type, st { u_int8_t cb_id = NFNL_MSG_TYPE(type); - if (cb_id >= ss->cb_count) { - DEBUGP("msgtype %u >= %u, returning\n", type, ss->cb_count); + if (cb_id >= ss->cb_count) return NULL; - } return &ss->cb[cb_id]; } @@ -125,6 +123,7 @@ void __nfa_fill(struct sk_buff *skb, int memcpy(NFA_DATA(nfa), data, attrlen); 
memset(NFA_DATA(nfa) + attrlen, 0, NFA_ALIGN(size) - size); } +EXPORT_SYMBOL_GPL(__nfa_fill); void nfattr_parse(struct nfattr *tb[], int maxattr, struct nfattr *nfa, int len) { @@ -137,6 +136,7 @@ void nfattr_parse(struct nfattr *tb[], i nfa = NFA_NEXT(nfa, len); } } +EXPORT_SYMBOL_GPL(nfattr_parse); /** * nfnetlink_check_attributes - check and parse nfnetlink attributes @@ -150,37 +150,15 @@ static int nfnetlink_check_attributes(struct nfnetlink_subsystem *subsys, struct nlmsghdr *nlh, struct nfattr *cda[]) { - int min_len; - u_int16_t attr_count; + int min_len = NLMSG_SPACE(sizeof(struct nfgenmsg)); u_int8_t cb_id = NFNL_MSG_TYPE(nlh->nlmsg_type); - - if (unlikely(cb_id >= subsys->cb_count)) { - DEBUGP("msgtype %u >= %u, returning\n", - cb_id, subsys->cb_count); - return -EINVAL; - } - - min_len = NLMSG_SPACE(sizeof(struct nfgenmsg)); - if (unlikely(nlh->nlmsg_len < min_len)) - return -EINVAL; - - attr_count = subsys->cb[cb_id].attr_count; - memset(cda, 0, sizeof(struct nfattr *) * attr_count); + u_int16_t attr_count = subsys->cb[cb_id].attr_count; /* check attribute lengths. */ if (likely(nlh->nlmsg_len > min_len)) { struct nfattr *attr = NFM_NFA(NLMSG_DATA(nlh)); int attrlen = nlh->nlmsg_len - NLMSG_ALIGN(min_len); - - while (NFA_OK(attr, attrlen)) { - unsigned flavor = NFA_TYPE(attr); - if (flavor) { - if (flavor > attr_count) - return -EINVAL; - cda[flavor - 1] = attr; - } - attr = NFA_NEXT(attr, attrlen); - } + nfattr_parse(cda, attr_count, attr, attrlen); } /* implicit: if nlmsg_len == min_len, we return 0, and an empty @@ -208,62 +186,46 @@ int nfnetlink_send(struct sk_buff *skb, return err; } +EXPORT_SYMBOL_GPL(nfnetlink_send); int nfnetlink_unicast(struct sk_buff *skb, u_int32_t pid, int flags) { return netlink_unicast(nfnl, skb, pid, flags); } +EXPORT_SYMBOL_GPL(nfnetlink_unicast); /* Process one complete nfnetlink message. */ -static int nfnetlink_rcv_msg(struct sk_buff *skb, - struct nlmsghdr *nlh, int *errp) +static int nfnetlink_rcv_msg(struct sk_buff *skb, struct nlmsghdr *nlh) { struct nfnl_callback *nc; struct nfnetlink_subsystem *ss; - int type, err = 0; + int type, err; - DEBUGP("entered; subsys=%u, msgtype=%u\n", - NFNL_SUBSYS_ID(nlh->nlmsg_type), - NFNL_MSG_TYPE(nlh->nlmsg_type)); - - if (security_netlink_recv(skb, CAP_NET_ADMIN)) { - DEBUGP("missing CAP_NET_ADMIN\n"); - *errp = -EPERM; - return -1; - } - - /* Only requests are handled by kernel now. 
*/ - if (!(nlh->nlmsg_flags & NLM_F_REQUEST)) { - DEBUGP("received non-request message\n"); - return 0; - } + if (security_netlink_recv(skb, CAP_NET_ADMIN)) + return -EPERM; /* All the messages must at least contain nfgenmsg */ - if (nlh->nlmsg_len < NLMSG_SPACE(sizeof(struct nfgenmsg))) { - DEBUGP("received message was too short\n"); + if (nlh->nlmsg_len < NLMSG_SPACE(sizeof(struct nfgenmsg))) return 0; - } type = nlh->nlmsg_type; ss = nfnetlink_get_subsys(type); if (!ss) { #ifdef CONFIG_KMOD - /* don't call nfnl_shunlock, since it would reenter + /* don't call nfnl_unlock, since it would reenter * with further packet processing */ - up(&nfnl_sem); + __nfnl_unlock(); request_module("nfnetlink-subsys-%d", NFNL_SUBSYS_ID(type)); - nfnl_shlock(); + nfnl_lock(); ss = nfnetlink_get_subsys(type); if (!ss) #endif - goto err_inval; + return -EINVAL; } nc = nfnetlink_find_client(type, ss); - if (!nc) { - DEBUGP("unable to find client for type %d\n", type); - goto err_inval; - } + if (!nc) + return -EINVAL; { u_int16_t attr_count = @@ -274,73 +236,21 @@ static int nfnetlink_rcv_msg(struct sk_b err = nfnetlink_check_attributes(ss, nlh, cda); if (err < 0) - goto err_inval; - - DEBUGP("calling handler\n"); - err = nc->call(nfnl, skb, nlh, cda, errp); - *errp = err; - return err; - } - -err_inval: - DEBUGP("returning -EINVAL\n"); - *errp = -EINVAL; - return -1; -} - -/* Process one packet of messages. */ -static inline int nfnetlink_rcv_skb(struct sk_buff *skb) -{ - int err; - struct nlmsghdr *nlh; - - while (skb->len >= NLMSG_SPACE(0)) { - u32 rlen; - - nlh = (struct nlmsghdr *)skb->data; - if (nlh->nlmsg_len < sizeof(struct nlmsghdr) - || skb->len < nlh->nlmsg_len) - return 0; - rlen = NLMSG_ALIGN(nlh->nlmsg_len); - if (rlen > skb->len) - rlen = skb->len; - if (nfnetlink_rcv_msg(skb, nlh, &err)) { - if (!err) - return -1; - netlink_ack(skb, nlh, err); - } else - if (nlh->nlmsg_flags & NLM_F_ACK) - netlink_ack(skb, nlh, 0); - skb_pull(skb, rlen); + return err; + return nc->call(nfnl, skb, nlh, cda); } - - return 0; } static void nfnetlink_rcv(struct sock *sk, int len) { - do { - struct sk_buff *skb; + unsigned int qlen = 0; - if (nfnl_shlock_nowait()) + do { + if (nfnl_trylock()) return; - - while ((skb = skb_dequeue(&sk->sk_receive_queue)) != NULL) { - if (nfnetlink_rcv_skb(skb)) { - if (skb->len) - skb_queue_head(&sk->sk_receive_queue, - skb); - else - kfree_skb(skb); - break; - } - kfree_skb(skb); - } - - /* don't call nfnl_shunlock, since it would reenter - * with further packet processing */ - up(&nfnl_sem); - } while(nfnl && nfnl->sk_receive_queue.qlen); + netlink_run_queue(sk, &qlen, nfnetlink_rcv_msg); + __nfnl_unlock(); + } while (qlen); } static void __exit nfnetlink_exit(void) @@ -355,7 +265,7 @@ static int __init nfnetlink_init(void) printk("Netfilter messages via NETLINK v%s.\n", nfversion); nfnl = netlink_kernel_create(NETLINK_NETFILTER, NFNLGRP_MAX, - nfnetlink_rcv, THIS_MODULE); + nfnetlink_rcv, NULL, THIS_MODULE); if (!nfnl) { printk(KERN_ERR "cannot initialize nfnetlink!\n"); return -1; @@ -366,10 +276,3 @@ static int __init nfnetlink_init(void) module_init(nfnetlink_init); module_exit(nfnetlink_exit); - -EXPORT_SYMBOL_GPL(nfnetlink_subsys_register); -EXPORT_SYMBOL_GPL(nfnetlink_subsys_unregister); -EXPORT_SYMBOL_GPL(nfnetlink_send); -EXPORT_SYMBOL_GPL(nfnetlink_unicast); -EXPORT_SYMBOL_GPL(nfattr_parse); -EXPORT_SYMBOL_GPL(__nfa_fill); diff -puN net/netfilter/nfnetlink_log.c~git-net net/netfilter/nfnetlink_log.c --- a/net/netfilter/nfnetlink_log.c~git-net +++ 
a/net/netfilter/nfnetlink_log.c @@ -10,11 +10,6 @@ * This program is free software; you can redistribute it and/or modify * it under the terms of the GNU General Public License version 2 as * published by the Free Software Foundation. - * - * 2006-01-26 Harald Welte - * - Add optional local and global sequence number to detect lost - * events from userspace - * */ #include #include @@ -163,10 +158,7 @@ instance_create(u_int16_t group_num, int /* needs to be two, since we _put() after creation */ atomic_set(&inst->use, 2); - init_timer(&inst->timer); - inst->timer.function = nfulnl_timer; - inst->timer.data = (unsigned long)inst; - /* don't start timer yet. (re)start it with every packet */ + setup_timer(&inst->timer, nfulnl_timer, (unsigned long)inst); inst->peer_pid = pid; inst->group_num = group_num; @@ -200,20 +192,14 @@ out_unlock: static int __nfulnl_send(struct nfulnl_instance *inst); static void -_instance_destroy2(struct nfulnl_instance *inst, int lock) +__instance_destroy(struct nfulnl_instance *inst) { /* first pull it out of the global list */ - if (lock) - write_lock_bh(&instances_lock); - UDEBUG("removing instance %p (queuenum=%u) from hash\n", inst, inst->group_num); hlist_del(&inst->hlist); - if (lock) - write_unlock_bh(&instances_lock); - /* then flush all pending packets from skb */ spin_lock_bh(&inst->lock); @@ -235,15 +221,11 @@ _instance_destroy2(struct nfulnl_instanc } static inline void -__instance_destroy(struct nfulnl_instance *inst) -{ - _instance_destroy2(inst, 0); -} - -static inline void instance_destroy(struct nfulnl_instance *inst) { - _instance_destroy2(inst, 1); + write_lock_bh(&instances_lock); + __instance_destroy(inst); + write_unlock_bh(&instances_lock); } static int @@ -365,9 +347,6 @@ __nfulnl_send(struct nfulnl_instance *in { int status; - if (!inst->skb) - return 0; - if (inst->qlen > 1) inst->lastnlh->nlmsg_type = NLMSG_DONE; @@ -391,7 +370,8 @@ static void nfulnl_timer(unsigned long d UDEBUG("timer function called, flushing buffer\n"); spin_lock_bh(&inst->lock); - __nfulnl_send(inst); + if (inst->skb) + __nfulnl_send(inst); spin_unlock_bh(&inst->lock); instance_put(inst); } @@ -409,15 +389,14 @@ __build_packet_message(struct nfulnl_ins const struct nf_loginfo *li, const char *prefix, unsigned int plen) { - unsigned char *old_tail; struct nfulnl_msg_packet_hdr pmsg; struct nlmsghdr *nlh; struct nfgenmsg *nfmsg; __be32 tmp_uint; + sk_buff_data_t old_tail = inst->skb->tail; UDEBUG("entered\n"); - old_tail = inst->skb->tail; nlh = NLMSG_PUT(inst->skb, 0, 0, NFNL_SUBSYS_ULOG << 8 | NFULNL_MSG_PACKET, sizeof(struct nfgenmsg)); @@ -509,11 +488,11 @@ __build_packet_message(struct nfulnl_ins NFA_PUT(inst->skb, NFULA_HWADDR, sizeof(phw), &phw); } - if (skb->tstamp.off_sec) { + if (skb->tstamp.tv64) { struct nfulnl_msg_packet_timestamp ts; - - ts.sec = cpu_to_be64(skb->tstamp.off_sec); - ts.usec = cpu_to_be64(skb->tstamp.off_usec); + struct timeval tv = ktime_to_timeval(skb->tstamp); + ts.sec = cpu_to_be64(tv.tv_sec); + ts.usec = cpu_to_be64(tv.tv_usec); NFA_PUT(inst->skb, NFULA_TIMESTAMP, sizeof(ts), &ts); } @@ -596,7 +575,6 @@ nfulnl_log_packet(unsigned int pf, struct nfulnl_instance *inst; const struct nf_loginfo *li; unsigned int qthreshold; - unsigned int nlbufsiz; unsigned int plen; if (li_user && li_user->type == NF_LOG_TYPE_ULOG) @@ -606,12 +584,7 @@ nfulnl_log_packet(unsigned int pf, inst = instance_lookup_get(li->u.ulog.group); if (!inst) - inst = instance_lookup_get(0); - if (!inst) { - PRINTR("nfnetlink_log: trying to log packet, " - "but no 
instance for group %u\n", li->u.ulog.group); return; - } plen = 0; if (prefix) @@ -667,24 +640,11 @@ nfulnl_log_packet(unsigned int pf, break; default: - spin_unlock_bh(&inst->lock); - instance_put(inst); - return; + goto unlock_and_release; } - if (size > inst->nlbufsiz) - nlbufsiz = size; - else - nlbufsiz = inst->nlbufsiz; - - if (!inst->skb) { - if (!(inst->skb = nfulnl_alloc_skb(nlbufsiz, size))) { - UDEBUG("error in nfulnl_alloc_skb(%u, %u)\n", - inst->nlbufsiz, size); - goto alloc_failure; - } - } else if (inst->qlen >= qthreshold || - size > skb_tailroom(inst->skb)) { + if (inst->qlen >= qthreshold || + (inst->skb && size > skb_tailroom(inst->skb))) { /* either the queue len is too high or we don't have * enough room in the skb left. flush to userspace. */ UDEBUG("flushing old skb\n"); @@ -693,12 +653,12 @@ nfulnl_log_packet(unsigned int pf, if (del_timer(&inst->timer)) instance_put(inst); __nfulnl_send(inst); + } - if (!(inst->skb = nfulnl_alloc_skb(nlbufsiz, size))) { - UDEBUG("error in nfulnl_alloc_skb(%u, %u)\n", - inst->nlbufsiz, size); + if (!inst->skb) { + inst->skb = nfulnl_alloc_skb(inst->nlbufsiz, size); + if (!inst->skb) goto alloc_failure; - } } UDEBUG("qlen %d, qthreshold %d\n", inst->qlen, qthreshold); @@ -760,7 +720,7 @@ static struct notifier_block nfulnl_rtnl static int nfulnl_recv_unsupp(struct sock *ctnl, struct sk_buff *skb, - struct nlmsghdr *nlh, struct nfattr *nfqa[], int *errp) + struct nlmsghdr *nlh, struct nfattr *nfqa[]) { return -ENOTSUPP; } @@ -798,7 +758,7 @@ static const int nfula_cfg_min[NFULA_CFG static int nfulnl_recv_config(struct sock *ctnl, struct sk_buff *skb, - struct nlmsghdr *nlh, struct nfattr *nfula[], int *errp) + struct nlmsghdr *nlh, struct nfattr *nfula[]) { struct nfgenmsg *nfmsg = NLMSG_DATA(nlh); u_int16_t group_num = ntohs(nfmsg->res_id); @@ -830,13 +790,13 @@ nfulnl_recv_config(struct sock *ctnl, st NETLINK_CB(skb).pid); if (!inst) { ret = -EINVAL; - goto out_put; + goto out; } break; case NFULNL_CFG_CMD_UNBIND: if (!inst) { ret = -ENODEV; - goto out_put; + goto out; } if (inst->peer_pid != NETLINK_CB(skb).pid) { @@ -845,7 +805,7 @@ nfulnl_recv_config(struct sock *ctnl, st } instance_destroy(inst); - break; + goto out; case NFULNL_CFG_CMD_PF_BIND: UDEBUG("registering log handler for pf=%u\n", pf); ret = nf_log_register(pf, &nfulnl_logger); @@ -869,7 +829,7 @@ nfulnl_recv_config(struct sock *ctnl, st "group=%u pid=%u =>ENOENT\n", group_num, NETLINK_CB(skb).pid); ret = -ENOENT; - goto out_put; + goto out; } if (inst->peer_pid != NETLINK_CB(skb).pid) { @@ -939,10 +899,8 @@ struct iter_state { unsigned int bucket; }; -static struct hlist_node *get_first(struct seq_file *seq) +static struct hlist_node *get_first(struct iter_state *st) { - struct iter_state *st = seq->private; - if (!st) return NULL; @@ -953,10 +911,8 @@ static struct hlist_node *get_first(stru return NULL; } -static struct hlist_node *get_next(struct seq_file *seq, struct hlist_node *h) +static struct hlist_node *get_next(struct iter_state *st, struct hlist_node *h) { - struct iter_state *st = seq->private; - h = h->next; while (!h) { if (++st->bucket >= INSTANCE_BUCKETS) @@ -967,13 +923,13 @@ static struct hlist_node *get_next(struc return h; } -static struct hlist_node *get_idx(struct seq_file *seq, loff_t pos) +static struct hlist_node *get_idx(struct iter_state *st, loff_t pos) { struct hlist_node *head; - head = get_first(seq); + head = get_first(st); if (head) - while (pos && (head = get_next(seq, head))) + while (pos && (head = get_next(st, head))) pos--; return 
pos ? NULL : head; } @@ -981,13 +937,13 @@ static struct hlist_node *get_idx(struct static void *seq_start(struct seq_file *seq, loff_t *pos) { read_lock_bh(&instances_lock); - return get_idx(seq, *pos); + return get_idx(seq->private, *pos); } static void *seq_next(struct seq_file *s, void *v, loff_t *pos) { (*pos)++; - return get_next(s, v); + return get_next(s->private, v); } static void seq_stop(struct seq_file *s, void *v) diff -puN net/netfilter/nfnetlink_queue.c~git-net net/netfilter/nfnetlink_queue.c --- a/net/netfilter/nfnetlink_queue.c~git-net +++ a/net/netfilter/nfnetlink_queue.c @@ -338,7 +338,7 @@ static struct sk_buff * nfqnl_build_packet_message(struct nfqnl_instance *queue, struct nfqnl_queue_entry *entry, int *errp) { - unsigned char *old_tail; + sk_buff_data_t old_tail; size_t size; size_t data_len = 0; struct sk_buff *skb; @@ -404,7 +404,7 @@ nfqnl_build_packet_message(struct nfqnl_ if (!skb) goto nlmsg_failure; - old_tail= skb->tail; + old_tail = skb->tail; nlh = NLMSG_PUT(skb, 0, 0, NFNL_SUBSYS_QUEUE << 8 | NFQNL_MSG_PACKET, sizeof(struct nfgenmsg)); @@ -495,11 +495,11 @@ nfqnl_build_packet_message(struct nfqnl_ NFA_PUT(skb, NFQA_HWADDR, sizeof(phw), &phw); } - if (entskb->tstamp.off_sec) { + if (entskb->tstamp.tv64) { struct nfqnl_msg_packet_timestamp ts; - - ts.sec = cpu_to_be64(entskb->tstamp.off_sec); - ts.usec = cpu_to_be64(entskb->tstamp.off_usec); + struct timeval tv = ktime_to_timeval(entskb->tstamp); + ts.sec = cpu_to_be64(tv.tv_sec); + ts.usec = cpu_to_be64(tv.tv_usec); NFA_PUT(skb, NFQA_TIMESTAMP, sizeof(ts), &ts); } @@ -648,7 +648,7 @@ nfqnl_mangle(void *data, int data_len, s } if (!skb_make_writable(&e->skb, data_len)) return -ENOMEM; - memcpy(e->skb->data, data, data_len); + skb_copy_to_linear_data(e->skb, data, data_len); e->skb->ip_summed = CHECKSUM_NONE; return 0; } @@ -783,7 +783,7 @@ static const int nfqa_verdict_min[NFQA_M static int nfqnl_recv_verdict(struct sock *ctnl, struct sk_buff *skb, - struct nlmsghdr *nlh, struct nfattr *nfqa[], int *errp) + struct nlmsghdr *nlh, struct nfattr *nfqa[]) { struct nfgenmsg *nfmsg = NLMSG_DATA(nlh); u_int16_t queue_num = ntohs(nfmsg->res_id); @@ -848,7 +848,7 @@ err_out_put: static int nfqnl_recv_unsupp(struct sock *ctnl, struct sk_buff *skb, - struct nlmsghdr *nlh, struct nfattr *nfqa[], int *errp) + struct nlmsghdr *nlh, struct nfattr *nfqa[]) { return -ENOTSUPP; } @@ -865,7 +865,7 @@ static struct nf_queue_handler nfqh = { static int nfqnl_recv_config(struct sock *ctnl, struct sk_buff *skb, - struct nlmsghdr *nlh, struct nfattr *nfqa[], int *errp) + struct nlmsghdr *nlh, struct nfattr *nfqa[]) { struct nfgenmsg *nfmsg = NLMSG_DATA(nlh); u_int16_t queue_num = ntohs(nfmsg->res_id); diff -puN net/netfilter/x_tables.c~git-net net/netfilter/x_tables.c --- a/net/netfilter/x_tables.c~git-net +++ a/net/netfilter/x_tables.c @@ -56,8 +56,8 @@ enum { }; static const char *xt_prefix[NPROTO] = { - [AF_INET] = "ip", - [AF_INET6] = "ip6", + [AF_INET] = "ip", + [AF_INET6] = "ip6", [NF_ARP] = "arp", }; @@ -651,12 +651,6 @@ void *xt_unregister_table(struct xt_tabl EXPORT_SYMBOL_GPL(xt_unregister_table); #ifdef CONFIG_PROC_FS -static char *xt_proto_prefix[NPROTO] = { - [AF_INET] = "ip", - [AF_INET6] = "ip6", - [NF_ARP] = "arp", -}; - static struct list_head *xt_get_idx(struct list_head *list, struct seq_file *seq, loff_t pos) { struct list_head *head = list->next; @@ -798,7 +792,7 @@ int xt_proto_init(int af) #ifdef CONFIG_PROC_FS - strlcpy(buf, xt_proto_prefix[af], sizeof(buf)); + strlcpy(buf, xt_prefix[af], sizeof(buf)); 
strlcat(buf, FORMAT_TABLES, sizeof(buf)); proc = proc_net_fops_create(buf, 0440, &xt_file_ops); if (!proc) @@ -806,14 +800,14 @@ int xt_proto_init(int af) proc->data = (void *) ((unsigned long) af | (TABLE << 16)); - strlcpy(buf, xt_proto_prefix[af], sizeof(buf)); + strlcpy(buf, xt_prefix[af], sizeof(buf)); strlcat(buf, FORMAT_MATCHES, sizeof(buf)); proc = proc_net_fops_create(buf, 0440, &xt_file_ops); if (!proc) goto out_remove_tables; proc->data = (void *) ((unsigned long) af | (MATCH << 16)); - strlcpy(buf, xt_proto_prefix[af], sizeof(buf)); + strlcpy(buf, xt_prefix[af], sizeof(buf)); strlcat(buf, FORMAT_TARGETS, sizeof(buf)); proc = proc_net_fops_create(buf, 0440, &xt_file_ops); if (!proc) @@ -825,12 +819,12 @@ int xt_proto_init(int af) #ifdef CONFIG_PROC_FS out_remove_matches: - strlcpy(buf, xt_proto_prefix[af], sizeof(buf)); + strlcpy(buf, xt_prefix[af], sizeof(buf)); strlcat(buf, FORMAT_MATCHES, sizeof(buf)); proc_net_remove(buf); out_remove_tables: - strlcpy(buf, xt_proto_prefix[af], sizeof(buf)); + strlcpy(buf, xt_prefix[af], sizeof(buf)); strlcat(buf, FORMAT_TABLES, sizeof(buf)); proc_net_remove(buf); out: @@ -844,15 +838,15 @@ void xt_proto_fini(int af) #ifdef CONFIG_PROC_FS char buf[XT_FUNCTION_MAXNAMELEN]; - strlcpy(buf, xt_proto_prefix[af], sizeof(buf)); + strlcpy(buf, xt_prefix[af], sizeof(buf)); strlcat(buf, FORMAT_TABLES, sizeof(buf)); proc_net_remove(buf); - strlcpy(buf, xt_proto_prefix[af], sizeof(buf)); + strlcpy(buf, xt_prefix[af], sizeof(buf)); strlcat(buf, FORMAT_TARGETS, sizeof(buf)); proc_net_remove(buf); - strlcpy(buf, xt_proto_prefix[af], sizeof(buf)); + strlcpy(buf, xt_prefix[af], sizeof(buf)); strlcat(buf, FORMAT_MATCHES, sizeof(buf)); proc_net_remove(buf); #endif /*CONFIG_PROC_FS*/ diff -puN net/netfilter/xt_CONNMARK.c~git-net net/netfilter/xt_CONNMARK.c --- a/net/netfilter/xt_CONNMARK.c~git-net +++ a/net/netfilter/xt_CONNMARK.c @@ -30,10 +30,7 @@ MODULE_ALIAS("ipt_CONNMARK"); #include #include -#include -#if defined(CONFIG_NF_CONNTRACK) || defined(CONFIG_NF_CONNTRACK_MODULE) #include -#endif static unsigned int target(struct sk_buff **pskb, @@ -44,40 +41,33 @@ target(struct sk_buff **pskb, const void *targinfo) { const struct xt_connmark_target_info *markinfo = targinfo; + struct nf_conn *ct; + enum ip_conntrack_info ctinfo; u_int32_t diff; u_int32_t mark; u_int32_t newmark; - u_int32_t ctinfo; - u_int32_t *ctmark = nf_ct_get_mark(*pskb, &ctinfo); - if (ctmark) { + ct = nf_ct_get(*pskb, &ctinfo); + if (ct) { switch(markinfo->mode) { case XT_CONNMARK_SET: - newmark = (*ctmark & ~markinfo->mask) | markinfo->mark; - if (newmark != *ctmark) { - *ctmark = newmark; -#if defined(CONFIG_IP_NF_CONNTRACK) || defined(CONFIG_IP_NF_CONNTRACK_MODULE) - ip_conntrack_event_cache(IPCT_MARK, *pskb); -#else + newmark = (ct->mark & ~markinfo->mask) | markinfo->mark; + if (newmark != ct->mark) { + ct->mark = newmark; nf_conntrack_event_cache(IPCT_MARK, *pskb); -#endif } break; case XT_CONNMARK_SAVE: - newmark = (*ctmark & ~markinfo->mask) | + newmark = (ct->mark & ~markinfo->mask) | ((*pskb)->mark & markinfo->mask); - if (*ctmark != newmark) { - *ctmark = newmark; -#if defined(CONFIG_IP_NF_CONNTRACK) || defined(CONFIG_IP_NF_CONNTRACK_MODULE) - ip_conntrack_event_cache(IPCT_MARK, *pskb); -#else + if (ct->mark != newmark) { + ct->mark = newmark; nf_conntrack_event_cache(IPCT_MARK, *pskb); -#endif } break; case XT_CONNMARK_RESTORE: mark = (*pskb)->mark; - diff = (*ctmark ^ mark) & markinfo->mask; + diff = (ct->mark ^ mark) & markinfo->mask; (*pskb)->mark = mark ^ diff; break; } diff 
-puN net/netfilter/xt_CONNSECMARK.c~git-net net/netfilter/xt_CONNSECMARK.c --- a/net/netfilter/xt_CONNSECMARK.c~git-net +++ a/net/netfilter/xt_CONNSECMARK.c @@ -19,7 +19,7 @@ #include #include #include -#include +#include #define PFX "CONNSECMARK: " @@ -36,12 +36,12 @@ MODULE_ALIAS("ip6t_CONNSECMARK"); static void secmark_save(struct sk_buff *skb) { if (skb->secmark) { - u32 *connsecmark; + struct nf_conn *ct; enum ip_conntrack_info ctinfo; - connsecmark = nf_ct_get_secmark(skb, &ctinfo); - if (connsecmark && !*connsecmark) - *connsecmark = skb->secmark; + ct = nf_ct_get(skb, &ctinfo); + if (ct && !ct->secmark) + ct->secmark = skb->secmark; } } @@ -52,12 +52,12 @@ static void secmark_save(struct sk_buff static void secmark_restore(struct sk_buff *skb) { if (!skb->secmark) { - u32 *connsecmark; + struct nf_conn *ct; enum ip_conntrack_info ctinfo; - connsecmark = nf_ct_get_secmark(skb, &ctinfo); - if (connsecmark && *connsecmark) - skb->secmark = *connsecmark; + ct = nf_ct_get(skb, &ctinfo); + if (ct && ct->secmark) + skb->secmark = ct->secmark; } } diff -puN net/netfilter/xt_DSCP.c~git-net net/netfilter/xt_DSCP.c --- a/net/netfilter/xt_DSCP.c~git-net +++ a/net/netfilter/xt_DSCP.c @@ -8,8 +8,6 @@ * published by the Free Software Foundation. * * See RFC2474 for a description of the DSCP field within the IP Header. - * - * xt_DSCP.c,v 1.8 2002/08/06 18:41:57 laforge Exp */ #include @@ -35,13 +33,13 @@ static unsigned int target(struct sk_buf const void *targinfo) { const struct xt_DSCP_info *dinfo = targinfo; - u_int8_t dscp = ipv4_get_dsfield((*pskb)->nh.iph) >> XT_DSCP_SHIFT; + u_int8_t dscp = ipv4_get_dsfield(ip_hdr(*pskb)) >> XT_DSCP_SHIFT; if (dscp != dinfo->dscp) { if (!skb_make_writable(pskb, sizeof(struct iphdr))) return NF_DROP; - ipv4_change_dsfield((*pskb)->nh.iph, (__u8)(~XT_DSCP_MASK), + ipv4_change_dsfield(ip_hdr(*pskb), (__u8)(~XT_DSCP_MASK), dinfo->dscp << XT_DSCP_SHIFT); } @@ -56,13 +54,13 @@ static unsigned int target6(struct sk_bu const void *targinfo) { const struct xt_DSCP_info *dinfo = targinfo; - u_int8_t dscp = ipv6_get_dsfield((*pskb)->nh.ipv6h) >> XT_DSCP_SHIFT; + u_int8_t dscp = ipv6_get_dsfield(ipv6_hdr(*pskb)) >> XT_DSCP_SHIFT; if (dscp != dinfo->dscp) { if (!skb_make_writable(pskb, sizeof(struct ipv6hdr))) return NF_DROP; - ipv6_change_dsfield((*pskb)->nh.ipv6h, (__u8)(~XT_DSCP_MASK), + ipv6_change_dsfield(ipv6_hdr(*pskb), (__u8)(~XT_DSCP_MASK), dinfo->dscp << XT_DSCP_SHIFT); } return XT_CONTINUE; diff -puN net/netfilter/xt_NOTRACK.c~git-net net/netfilter/xt_NOTRACK.c --- a/net/netfilter/xt_NOTRACK.c~git-net +++ a/net/netfilter/xt_NOTRACK.c @@ -5,7 +5,7 @@ #include #include -#include +#include MODULE_LICENSE("GPL"); MODULE_ALIAS("ipt_NOTRACK"); @@ -26,7 +26,7 @@ target(struct sk_buff **pskb, If there is a real ct entry correspondig to this packet, it'll hang aroun till timing out. We don't deal with it for performance reasons. JK */ - nf_ct_untrack(*pskb); + (*pskb)->nfct = &nf_conntrack_untracked.ct_general; (*pskb)->nfctinfo = IP_CT_NEW; nf_conntrack_get((*pskb)->nfct); diff -puN net/netfilter/xt_TCPMSS.c~git-net net/netfilter/xt_TCPMSS.c --- a/net/netfilter/xt_TCPMSS.c~git-net +++ a/net/netfilter/xt_TCPMSS.c @@ -54,7 +54,7 @@ tcpmss_mangle_packet(struct sk_buff **ps return -1; tcplen = (*pskb)->len - tcphoff; - tcph = (struct tcphdr *)((*pskb)->nh.raw + tcphoff); + tcph = (struct tcphdr *)(skb_network_header(*pskb) + tcphoff); /* Since it passed flags test in tcp match, we know it is is not a fragment, and has data >= tcp header length. 
SYN @@ -113,7 +113,7 @@ tcpmss_mangle_packet(struct sk_buff **ps return -1; kfree_skb(*pskb); *pskb = newskb; - tcph = (struct tcphdr *)((*pskb)->nh.raw + tcphoff); + tcph = (struct tcphdr *)(skb_network_header(*pskb) + tcphoff); } skb_put((*pskb), TCPOLEN_MSS); @@ -145,7 +145,7 @@ xt_tcpmss_target4(struct sk_buff **pskb, const struct xt_target *target, const void *targinfo) { - struct iphdr *iph = (*pskb)->nh.iph; + struct iphdr *iph = ip_hdr(*pskb); __be16 newlen; int ret; @@ -154,7 +154,7 @@ xt_tcpmss_target4(struct sk_buff **pskb, if (ret < 0) return NF_DROP; if (ret > 0) { - iph = (*pskb)->nh.iph; + iph = ip_hdr(*pskb); newlen = htons(ntohs(iph->tot_len) + ret); nf_csum_replace2(&iph->check, iph->tot_len, newlen); iph->tot_len = newlen; @@ -171,7 +171,7 @@ xt_tcpmss_target6(struct sk_buff **pskb, const struct xt_target *target, const void *targinfo) { - struct ipv6hdr *ipv6h = (*pskb)->nh.ipv6h; + struct ipv6hdr *ipv6h = ipv6_hdr(*pskb); u8 nexthdr; int tcphoff; int ret; @@ -187,7 +187,7 @@ xt_tcpmss_target6(struct sk_buff **pskb, if (ret < 0) return NF_DROP; if (ret > 0) { - ipv6h = (*pskb)->nh.ipv6h; + ipv6h = ipv6_hdr(*pskb); ipv6h->payload_len = htons(ntohs(ipv6h->payload_len) + ret); } return XT_CONTINUE; diff -puN net/netfilter/xt_connbytes.c~git-net net/netfilter/xt_connbytes.c --- a/net/netfilter/xt_connbytes.c~git-net +++ a/net/netfilter/xt_connbytes.c @@ -1,20 +1,11 @@ /* Kernel module to match connection tracking byte counter. * GPL (C) 2002 Martin Devera (devik@cdi.cz). - * - * 2004-07-20 Harald Welte - * - reimplemented to use per-connection accounting counters - * - add functionality to match number of packets - * - add functionality to match average packet size - * - add support to match directions seperately - * 2005-10-16 Harald Welte - * - Port to x_tables - * */ #include #include -#include #include #include +#include #include #include @@ -24,22 +15,6 @@ MODULE_AUTHOR("Harald Welte 0xffffffffULL) { - unsigned int shift = fls(divisor >> 32); - - d = divisor >> shift; - dividend >>= shift; - } - - do_div(dividend, d); - return dividend; -} - static int match(const struct sk_buff *skb, const struct net_device *in, @@ -51,13 +26,17 @@ match(const struct sk_buff *skb, int *hotdrop) { const struct xt_connbytes_info *sinfo = matchinfo; + struct nf_conn *ct; + enum ip_conntrack_info ctinfo; u_int64_t what = 0; /* initialize to make gcc happy */ u_int64_t bytes = 0; u_int64_t pkts = 0; const struct ip_conntrack_counter *counters; - if (!(counters = nf_ct_get_counters(skb))) - return 0; /* no match */ + ct = nf_ct_get(skb, &ctinfo); + if (!ct) + return 0; + counters = ct->counters; switch (sinfo->what) { case XT_CONNBYTES_PKTS: diff -puN net/netfilter/xt_connmark.c~git-net net/netfilter/xt_connmark.c --- a/net/netfilter/xt_connmark.c~git-net +++ a/net/netfilter/xt_connmark.c @@ -21,16 +21,15 @@ #include #include +#include +#include +#include MODULE_AUTHOR("Henrik Nordstrom "); MODULE_DESCRIPTION("IP tables connmark match module"); MODULE_LICENSE("GPL"); MODULE_ALIAS("ipt_connmark"); -#include -#include -#include - static int match(const struct sk_buff *skb, const struct net_device *in, @@ -42,12 +41,14 @@ match(const struct sk_buff *skb, int *hotdrop) { const struct xt_connmark_info *info = matchinfo; - u_int32_t ctinfo; - const u_int32_t *ctmark = nf_ct_get_mark(skb, &ctinfo); - if (!ctmark) + struct nf_conn *ct; + enum ip_conntrack_info ctinfo; + + ct = nf_ct_get(skb, &ctinfo); + if (!ct) return 0; - return (((*ctmark) & info->mask) == info->mark) ^ info->invert; + return 
(((ct->mark) & info->mask) == info->mark) ^ info->invert; } static int diff -puN net/netfilter/xt_conntrack.c~git-net net/netfilter/xt_conntrack.c --- a/net/netfilter/xt_conntrack.c~git-net +++ a/net/netfilter/xt_conntrack.c @@ -10,121 +10,15 @@ #include #include - -#if defined(CONFIG_IP_NF_CONNTRACK) || defined(CONFIG_IP_NF_CONNTRACK_MODULE) -#include -#include -#else -#include -#endif - #include #include -#include +#include MODULE_LICENSE("GPL"); MODULE_AUTHOR("Marc Boucher "); MODULE_DESCRIPTION("iptables connection tracking match module"); MODULE_ALIAS("ipt_conntrack"); -#if defined(CONFIG_IP_NF_CONNTRACK) || defined(CONFIG_IP_NF_CONNTRACK_MODULE) - -static int -match(const struct sk_buff *skb, - const struct net_device *in, - const struct net_device *out, - const struct xt_match *match, - const void *matchinfo, - int offset, - unsigned int protoff, - int *hotdrop) -{ - const struct xt_conntrack_info *sinfo = matchinfo; - struct ip_conntrack *ct; - enum ip_conntrack_info ctinfo; - unsigned int statebit; - - ct = ip_conntrack_get((struct sk_buff *)skb, &ctinfo); - -#define FWINV(bool, invflg) ((bool) ^ !!(sinfo->invflags & invflg)) - - if (ct == &ip_conntrack_untracked) - statebit = XT_CONNTRACK_STATE_UNTRACKED; - else if (ct) - statebit = XT_CONNTRACK_STATE_BIT(ctinfo); - else - statebit = XT_CONNTRACK_STATE_INVALID; - - if (sinfo->flags & XT_CONNTRACK_STATE) { - if (ct) { - if (test_bit(IPS_SRC_NAT_BIT, &ct->status)) - statebit |= XT_CONNTRACK_STATE_SNAT; - if (test_bit(IPS_DST_NAT_BIT, &ct->status)) - statebit |= XT_CONNTRACK_STATE_DNAT; - } - if (FWINV((statebit & sinfo->statemask) == 0, - XT_CONNTRACK_STATE)) - return 0; - } - - if (ct == NULL) { - if (sinfo->flags & ~XT_CONNTRACK_STATE) - return 0; - return 1; - } - - if (sinfo->flags & XT_CONNTRACK_PROTO && - FWINV(ct->tuplehash[IP_CT_DIR_ORIGINAL].tuple.dst.protonum != - sinfo->tuple[IP_CT_DIR_ORIGINAL].dst.protonum, - XT_CONNTRACK_PROTO)) - return 0; - - if (sinfo->flags & XT_CONNTRACK_ORIGSRC && - FWINV((ct->tuplehash[IP_CT_DIR_ORIGINAL].tuple.src.ip & - sinfo->sipmsk[IP_CT_DIR_ORIGINAL].s_addr) != - sinfo->tuple[IP_CT_DIR_ORIGINAL].src.ip, - XT_CONNTRACK_ORIGSRC)) - return 0; - - if (sinfo->flags & XT_CONNTRACK_ORIGDST && - FWINV((ct->tuplehash[IP_CT_DIR_ORIGINAL].tuple.dst.ip & - sinfo->dipmsk[IP_CT_DIR_ORIGINAL].s_addr) != - sinfo->tuple[IP_CT_DIR_ORIGINAL].dst.ip, - XT_CONNTRACK_ORIGDST)) - return 0; - - if (sinfo->flags & XT_CONNTRACK_REPLSRC && - FWINV((ct->tuplehash[IP_CT_DIR_REPLY].tuple.src.ip & - sinfo->sipmsk[IP_CT_DIR_REPLY].s_addr) != - sinfo->tuple[IP_CT_DIR_REPLY].src.ip, - XT_CONNTRACK_REPLSRC)) - return 0; - - if (sinfo->flags & XT_CONNTRACK_REPLDST && - FWINV((ct->tuplehash[IP_CT_DIR_REPLY].tuple.dst.ip & - sinfo->dipmsk[IP_CT_DIR_REPLY].s_addr) != - sinfo->tuple[IP_CT_DIR_REPLY].dst.ip, - XT_CONNTRACK_REPLDST)) - return 0; - - if (sinfo->flags & XT_CONNTRACK_STATUS && - FWINV((ct->status & sinfo->statusmask) == 0, - XT_CONNTRACK_STATUS)) - return 0; - - if (sinfo->flags & XT_CONNTRACK_EXPIRES) { - unsigned long expires = timer_pending(&ct->timeout) ? 
- (ct->timeout.expires - jiffies)/HZ : 0; - - if (FWINV(!(expires >= sinfo->expires_min && - expires <= sinfo->expires_max), - XT_CONNTRACK_EXPIRES)) - return 0; - } - return 1; -} - -#else /* CONFIG_IP_NF_CONNTRACK */ static int match(const struct sk_buff *skb, const struct net_device *in, @@ -220,8 +114,6 @@ match(const struct sk_buff *skb, return 1; } -#endif /* CONFIG_NF_IP_CONNTRACK */ - static int checkentry(const char *tablename, const void *ip, diff -puN net/netfilter/xt_dscp.c~git-net net/netfilter/xt_dscp.c --- a/net/netfilter/xt_dscp.c~git-net +++ a/net/netfilter/xt_dscp.c @@ -1,7 +1,5 @@ /* IP tables module for matching the value of the IPv4/IPv6 DSCP field * - * xt_dscp.c,v 1.3 2002/08/05 19:00:21 laforge Exp - * * (C) 2002 by Harald Welte * * This program is free software; you can redistribute it and/or modify @@ -34,7 +32,7 @@ static int match(const struct sk_buff *s int *hotdrop) { const struct xt_dscp_info *info = matchinfo; - u_int8_t dscp = ipv4_get_dsfield(skb->nh.iph) >> XT_DSCP_SHIFT; + u_int8_t dscp = ipv4_get_dsfield(ip_hdr(skb)) >> XT_DSCP_SHIFT; return (dscp == info->dscp) ^ !!info->invert; } @@ -49,7 +47,7 @@ static int match6(const struct sk_buff * int *hotdrop) { const struct xt_dscp_info *info = matchinfo; - u_int8_t dscp = ipv6_get_dsfield(skb->nh.ipv6h) >> XT_DSCP_SHIFT; + u_int8_t dscp = ipv6_get_dsfield(ipv6_hdr(skb)) >> XT_DSCP_SHIFT; return (dscp == info->dscp) ^ !!info->invert; } diff -puN net/netfilter/xt_hashlimit.c~git-net net/netfilter/xt_hashlimit.c --- a/net/netfilter/xt_hashlimit.c~git-net +++ a/net/netfilter/xt_hashlimit.c @@ -216,10 +216,8 @@ static int htable_create(struct xt_hashl hinfo->pde->proc_fops = &dl_file_ops; hinfo->pde->data = hinfo; - init_timer(&hinfo->timer); + setup_timer(&hinfo->timer, htable_gc, (unsigned long )hinfo); hinfo->timer.expires = jiffies + msecs_to_jiffies(hinfo->cfg.gc_interval); - hinfo->timer.data = (unsigned long )hinfo; - hinfo->timer.function = htable_gc; add_timer(&hinfo->timer); spin_lock_bh(&hashlimit_lock); @@ -380,22 +378,22 @@ hashlimit_init_dst(struct xt_hashlimit_h switch (hinfo->family) { case AF_INET: if (hinfo->cfg.mode & XT_HASHLIMIT_HASH_DIP) - dst->addr.ip.dst = skb->nh.iph->daddr; + dst->addr.ip.dst = ip_hdr(skb)->daddr; if (hinfo->cfg.mode & XT_HASHLIMIT_HASH_SIP) - dst->addr.ip.src = skb->nh.iph->saddr; + dst->addr.ip.src = ip_hdr(skb)->saddr; if (!(hinfo->cfg.mode & (XT_HASHLIMIT_HASH_DPT | XT_HASHLIMIT_HASH_SPT))) return 0; - nexthdr = skb->nh.iph->protocol; + nexthdr = ip_hdr(skb)->protocol; break; #if defined(CONFIG_IP6_NF_IPTABLES) || defined(CONFIG_IP6_NF_IPTABLES_MODULE) case AF_INET6: if (hinfo->cfg.mode & XT_HASHLIMIT_HASH_DIP) - memcpy(&dst->addr.ip6.dst, &skb->nh.ipv6h->daddr, + memcpy(&dst->addr.ip6.dst, &ipv6_hdr(skb)->daddr, sizeof(dst->addr.ip6.dst)); if (hinfo->cfg.mode & XT_HASHLIMIT_HASH_SIP) - memcpy(&dst->addr.ip6.src, &skb->nh.ipv6h->saddr, + memcpy(&dst->addr.ip6.src, &ipv6_hdr(skb)->saddr, sizeof(dst->addr.ip6.src)); if (!(hinfo->cfg.mode & diff -puN net/netfilter/xt_helper.c~git-net net/netfilter/xt_helper.c --- a/net/netfilter/xt_helper.c~git-net +++ a/net/netfilter/xt_helper.c @@ -5,26 +5,16 @@ * This program is free software; you can redistribute it and/or modify * it under the terms of the GNU General Public License version 2 as * published by the Free Software Foundation. 
- * - * 19 Mar 2002 Harald Welte : - * - Port to newnat infrastructure */ #include #include #include -#if defined(CONFIG_IP_NF_CONNTRACK) || defined(CONFIG_IP_NF_CONNTRACK_MODULE) -#include -#include -#include -#else #include #include #include -#endif #include #include -#include MODULE_LICENSE("GPL"); MODULE_AUTHOR("Martin Josefsson "); @@ -38,55 +28,6 @@ MODULE_ALIAS("ip6t_helper"); #define DEBUGP(format, args...) #endif -#if defined(CONFIG_IP_NF_CONNTRACK) || defined(CONFIG_IP_NF_CONNTRACK_MODULE) -static int -match(const struct sk_buff *skb, - const struct net_device *in, - const struct net_device *out, - const struct xt_match *match, - const void *matchinfo, - int offset, - unsigned int protoff, - int *hotdrop) -{ - const struct xt_helper_info *info = matchinfo; - struct ip_conntrack *ct; - enum ip_conntrack_info ctinfo; - int ret = info->invert; - - ct = ip_conntrack_get((struct sk_buff *)skb, &ctinfo); - if (!ct) { - DEBUGP("xt_helper: Eek! invalid conntrack?\n"); - return ret; - } - - if (!ct->master) { - DEBUGP("xt_helper: conntrack %p has no master\n", ct); - return ret; - } - - read_lock_bh(&ip_conntrack_lock); - if (!ct->master->helper) { - DEBUGP("xt_helper: master ct %p has no helper\n", - exp->expectant); - goto out_unlock; - } - - DEBUGP("master's name = %s , info->name = %s\n", - ct->master->helper->name, info->name); - - if (info->name[0] == '\0') - ret ^= 1; - else - ret ^= !strncmp(ct->master->helper->name, info->name, - strlen(ct->master->helper->name)); -out_unlock: - read_unlock_bh(&ip_conntrack_lock); - return ret; -} - -#else /* CONFIG_IP_NF_CONNTRACK */ - static int match(const struct sk_buff *skb, const struct net_device *in, @@ -134,7 +75,6 @@ out_unlock: read_unlock_bh(&nf_conntrack_lock); return ret; } -#endif static int check(const char *tablename, const void *inf, diff -puN net/netfilter/xt_length.c~git-net net/netfilter/xt_length.c --- a/net/netfilter/xt_length.c~git-net +++ a/net/netfilter/xt_length.c @@ -31,7 +31,7 @@ match(const struct sk_buff *skb, int *hotdrop) { const struct xt_length_info *info = matchinfo; - u_int16_t pktlen = ntohs(skb->nh.iph->tot_len); + u_int16_t pktlen = ntohs(ip_hdr(skb)->tot_len); return (pktlen >= info->min && pktlen <= info->max) ^ info->invert; } @@ -47,7 +47,8 @@ match6(const struct sk_buff *skb, int *hotdrop) { const struct xt_length_info *info = matchinfo; - u_int16_t pktlen = ntohs(skb->nh.ipv6h->payload_len) + sizeof(struct ipv6hdr); + const u_int16_t pktlen = (ntohs(ipv6_hdr(skb)->payload_len) + + sizeof(struct ipv6hdr)); return (pktlen >= info->min && pktlen <= info->max) ^ info->invert; } diff -puN net/netfilter/xt_limit.c~git-net net/netfilter/xt_limit.c --- a/net/netfilter/xt_limit.c~git-net +++ a/net/netfilter/xt_limit.c @@ -1,10 +1,3 @@ -/* Kernel module to control the rate - * - * 2 September 1999: Changed from the target RATE to the match - * `limit', removed logging. Did I mention that - * Alexey is a fucking genius? - * Rusty Russell (rusty@rustcorp.com.au). */ - /* (C) 1999 Jérôme de Vivie * (C) 1999 Hervé Eychenne * diff -puN net/netfilter/xt_mac.c~git-net net/netfilter/xt_mac.c --- a/net/netfilter/xt_mac.c~git-net +++ a/net/netfilter/xt_mac.c @@ -37,8 +37,8 @@ match(const struct sk_buff *skb, const struct xt_mac_info *info = matchinfo; /* Is mac pointer valid? */ - return (skb->mac.raw >= skb->head - && (skb->mac.raw + ETH_HLEN) <= skb->data + return (skb_mac_header(skb) >= skb->head && + (skb_mac_header(skb) + ETH_HLEN) <= skb->data /* If so, compare... 
*/ && ((!compare_ether_addr(eth_hdr(skb)->h_source, info->srcaddr)) ^ info->invert)); diff -puN net/netfilter/xt_pkttype.c~git-net net/netfilter/xt_pkttype.c --- a/net/netfilter/xt_pkttype.c~git-net +++ a/net/netfilter/xt_pkttype.c @@ -34,7 +34,7 @@ static int match(const struct sk_buff *s const struct xt_pkttype_info *info = matchinfo; if (skb->pkt_type == PACKET_LOOPBACK) - type = (MULTICAST(skb->nh.iph->daddr) + type = (MULTICAST(ip_hdr(skb)->daddr) ? PACKET_MULTICAST : PACKET_BROADCAST); else diff -puN net/netfilter/xt_realm.c~git-net net/netfilter/xt_realm.c --- a/net/netfilter/xt_realm.c~git-net +++ a/net/netfilter/xt_realm.c @@ -1,7 +1,5 @@ /* IP tables module for matching the routing realm * - * $Id: ipt_realm.c,v 1.3 2004/03/05 13:25:40 laforge Exp $ - * * (C) 2003 by Sampsa Ranta * * This program is free software; you can redistribute it and/or modify diff -puN net/netfilter/xt_state.c~git-net net/netfilter/xt_state.c --- a/net/netfilter/xt_state.c~git-net +++ a/net/netfilter/xt_state.c @@ -10,7 +10,7 @@ #include #include -#include +#include #include #include @@ -36,7 +36,7 @@ match(const struct sk_buff *skb, if (nf_ct_is_untracked(skb)) statebit = XT_STATE_UNTRACKED; - else if (!nf_ct_get_ctinfo(skb, &ctinfo)) + else if (!nf_ct_get(skb, &ctinfo)) statebit = XT_STATE_INVALID; else statebit = XT_STATE_BIT(ctinfo); diff -puN net/netlink/af_netlink.c~git-net net/netlink/af_netlink.c --- a/net/netlink/af_netlink.c~git-net +++ a/net/netlink/af_netlink.c @@ -56,6 +56,7 @@ #include #include #include +#include #include #include @@ -76,7 +77,8 @@ struct netlink_sock { unsigned long state; wait_queue_head_t wait; struct netlink_callback *cb; - spinlock_t cb_lock; + struct mutex *cb_mutex; + struct mutex cb_def_mutex; void (*data_ready)(struct sock *sk, int bytes); struct module *module; }; @@ -108,6 +110,7 @@ struct netlink_table { unsigned long *listeners; unsigned int nl_nonroot; unsigned int groups; + struct mutex *cb_mutex; struct module *module; int registered; }; @@ -370,7 +373,8 @@ static struct proto netlink_proto = { .obj_size = sizeof(struct netlink_sock), }; -static int __netlink_create(struct socket *sock, int protocol) +static int __netlink_create(struct socket *sock, struct mutex *cb_mutex, + int protocol) { struct sock *sk; struct netlink_sock *nlk; @@ -384,7 +388,8 @@ static int __netlink_create(struct socke sock_init_data(sock, sk); nlk = nlk_sk(sk); - spin_lock_init(&nlk->cb_lock); + nlk->cb_mutex = cb_mutex ? 
: &nlk->cb_def_mutex; + mutex_init(nlk->cb_mutex); init_waitqueue_head(&nlk->wait); sk->sk_destruct = netlink_sock_destruct; @@ -395,8 +400,8 @@ static int __netlink_create(struct socke static int netlink_create(struct socket *sock, int protocol) { struct module *module = NULL; + struct mutex *cb_mutex; struct netlink_sock *nlk; - unsigned int groups; int err = 0; sock->state = SS_UNCONNECTED; @@ -418,10 +423,10 @@ static int netlink_create(struct socket if (nl_table[protocol].registered && try_module_get(nl_table[protocol].module)) module = nl_table[protocol].module; - groups = nl_table[protocol].groups; + cb_mutex = nl_table[protocol].cb_mutex; netlink_unlock_table(); - if ((err = __netlink_create(sock, protocol)) < 0) + if ((err = __netlink_create(sock, cb_mutex, protocol)) < 0) goto out_module; nlk = nlk_sk(sock->sk); @@ -446,14 +451,14 @@ static int netlink_release(struct socket sock_orphan(sk); nlk = nlk_sk(sk); - spin_lock(&nlk->cb_lock); + mutex_lock(nlk->cb_mutex); if (nlk->cb) { if (nlk->cb->done) nlk->cb->done(nlk->cb); netlink_destroy_callback(nlk->cb); nlk->cb = NULL; } - spin_unlock(&nlk->cb_lock); + mutex_unlock(nlk->cb_mutex); /* OK. Socket is unlinked, and, therefore, no new packets will arrive */ @@ -1215,7 +1220,7 @@ static int netlink_recvmsg(struct kiocb copied = len; } - skb->h.raw = skb->data; + skb_reset_transport_header(skb); err = skb_copy_datagram_iovec(skb, 0, msg->msg_iov, copied); if (msg->msg_name) { @@ -1242,6 +1247,9 @@ static int netlink_recvmsg(struct kiocb scm_recv(sock, msg, siocb->scm, flags); + if (flags & MSG_TRUNC) + copied = skb->len; + out: netlink_rcv_wake(sk); return err ? : copied; @@ -1265,7 +1273,7 @@ static void netlink_data_ready(struct so struct sock * netlink_kernel_create(int unit, unsigned int groups, void (*input)(struct sock *sk, int len), - struct module *module) + struct mutex *cb_mutex, struct module *module) { struct socket *sock; struct sock *sk; @@ -1280,7 +1288,7 @@ netlink_kernel_create(int unit, unsigned if (sock_create_lite(PF_NETLINK, SOCK_DGRAM, unit, &sock)) return NULL; - if (__netlink_create(sock, unit) < 0) + if (__netlink_create(sock, cb_mutex, unit) < 0) goto out_sock_release; if (groups < 32) @@ -1304,6 +1312,7 @@ netlink_kernel_create(int unit, unsigned netlink_table_grab(); nl_table[unit].groups = groups; nl_table[unit].listeners = listeners; + nl_table[unit].cb_mutex = cb_mutex; nl_table[unit].module = module; nl_table[unit].registered = 1; netlink_table_ungrab(); @@ -1346,7 +1355,7 @@ static int netlink_dump(struct sock *sk) if (!skb) goto errout; - spin_lock(&nlk->cb_lock); + mutex_lock(nlk->cb_mutex); cb = nlk->cb; if (cb == NULL) { @@ -1357,7 +1366,7 @@ static int netlink_dump(struct sock *sk) len = cb->dump(skb, cb); if (len > 0) { - spin_unlock(&nlk->cb_lock); + mutex_unlock(nlk->cb_mutex); skb_queue_tail(&sk->sk_receive_queue, skb); sk->sk_data_ready(sk, len); return 0; @@ -1375,13 +1384,13 @@ static int netlink_dump(struct sock *sk) if (cb->done) cb->done(cb); nlk->cb = NULL; - spin_unlock(&nlk->cb_lock); + mutex_unlock(nlk->cb_mutex); netlink_destroy_callback(cb); return 0; errout_skb: - spin_unlock(&nlk->cb_lock); + mutex_unlock(nlk->cb_mutex); kfree_skb(skb); errout: return err; @@ -1413,19 +1422,24 @@ int netlink_dump_start(struct sock *ssk, } nlk = nlk_sk(sk); /* A dump or destruction is in progress... 
*/ - spin_lock(&nlk->cb_lock); + mutex_lock(nlk->cb_mutex); if (nlk->cb || sock_flag(sk, SOCK_DEAD)) { - spin_unlock(&nlk->cb_lock); + mutex_unlock(nlk->cb_mutex); netlink_destroy_callback(cb); sock_put(sk); return -EBUSY; } nlk->cb = cb; - spin_unlock(&nlk->cb_lock); + mutex_unlock(nlk->cb_mutex); netlink_dump(sk); sock_put(sk); - return 0; + + /* We successfully started a dump, by returning -EINTR we + * signal the queue mangement to interrupt processing of + * any netlink messages so userspace gets a chance to read + * the results. */ + return -EINTR; } void netlink_ack(struct sk_buff *in_skb, struct nlmsghdr *nlh, int err) @@ -1462,27 +1476,35 @@ void netlink_ack(struct sk_buff *in_skb, } static int netlink_rcv_skb(struct sk_buff *skb, int (*cb)(struct sk_buff *, - struct nlmsghdr *, int *)) + struct nlmsghdr *)) { struct nlmsghdr *nlh; int err; while (skb->len >= nlmsg_total_size(0)) { - nlh = (struct nlmsghdr *) skb->data; + nlh = nlmsg_hdr(skb); + err = 0; if (nlh->nlmsg_len < NLMSG_HDRLEN || skb->len < nlh->nlmsg_len) return 0; - if (cb(skb, nlh, &err) < 0) { - /* Not an error, but we have to interrupt processing - * here. Note: that in this case we do not pull - * message from skb, it will be processed later. - */ - if (err == 0) - return -1; + /* Only requests are handled by the kernel */ + if (!(nlh->nlmsg_flags & NLM_F_REQUEST)) + goto skip; + + /* Skip control messages */ + if (nlh->nlmsg_type < NLMSG_MIN_TYPE) + goto skip; + + err = cb(skb, nlh); + if (err == -EINTR) { + /* Not an error, but we interrupt processing */ + netlink_queue_skip(nlh, skb); + return err; + } +skip: + if (nlh->nlmsg_flags & NLM_F_ACK || err) netlink_ack(skb, nlh, err); - } else if (nlh->nlmsg_flags & NLM_F_ACK) - netlink_ack(skb, nlh, 0); netlink_queue_skip(nlh, skb); } @@ -1504,9 +1526,14 @@ static int netlink_rcv_skb(struct sk_buf * * qlen must be initialized to 0 before the initial entry, afterwards * the function may be called repeatedly until qlen reaches 0. + * + * The callback function may return -EINTR to signal that processing + * of netlink messages shall be interrupted. In this case the message + * currently being processed will NOT be requeued onto the receive + * queue. 
*/ void netlink_run_queue(struct sock *sk, unsigned int *qlen, - int (*cb)(struct sk_buff *, struct nlmsghdr *, int *)) + int (*cb)(struct sk_buff *, struct nlmsghdr *)) { struct sk_buff *skb; @@ -1825,7 +1852,6 @@ EXPORT_SYMBOL(netlink_broadcast); EXPORT_SYMBOL(netlink_dump_start); EXPORT_SYMBOL(netlink_kernel_create); EXPORT_SYMBOL(netlink_register_notifier); -EXPORT_SYMBOL(netlink_set_err); EXPORT_SYMBOL(netlink_set_nonroot); EXPORT_SYMBOL(netlink_unicast); EXPORT_SYMBOL(netlink_unregister_notifier); diff -puN net/netlink/attr.c~git-net net/netlink/attr.c --- a/net/netlink/attr.c~git-net +++ a/net/netlink/attr.c @@ -67,6 +67,11 @@ static int validate_nla(struct nlattr *n } break; + case NLA_BINARY: + if (pt->len && attrlen > pt->len) + return -ERANGE; + break; + default: if (pt->len) minlen = pt->len; diff -puN net/netlink/genetlink.c~git-net net/netlink/genetlink.c --- a/net/netlink/genetlink.c~git-net +++ a/net/netlink/genetlink.c @@ -295,66 +295,46 @@ int genl_unregister_family(struct genl_f return -ENOENT; } -static int genl_rcv_msg(struct sk_buff *skb, struct nlmsghdr *nlh, - int *errp) +static int genl_rcv_msg(struct sk_buff *skb, struct nlmsghdr *nlh) { struct genl_ops *ops; struct genl_family *family; struct genl_info info; struct genlmsghdr *hdr = nlmsg_data(nlh); - int hdrlen, err = -EINVAL; - - if (!(nlh->nlmsg_flags & NLM_F_REQUEST)) - goto ignore; - - if (nlh->nlmsg_type < NLMSG_MIN_TYPE) - goto ignore; + int hdrlen, err; family = genl_family_find_byid(nlh->nlmsg_type); - if (family == NULL) { - err = -ENOENT; - goto errout; - } + if (family == NULL) + return -ENOENT; hdrlen = GENL_HDRLEN + family->hdrsize; if (nlh->nlmsg_len < nlmsg_msg_size(hdrlen)) - goto errout; + return -EINVAL; ops = genl_get_cmd(hdr->cmd, family); - if (ops == NULL) { - err = -EOPNOTSUPP; - goto errout; - } + if (ops == NULL) + return -EOPNOTSUPP; - if ((ops->flags & GENL_ADMIN_PERM) && security_netlink_recv(skb, CAP_NET_ADMIN)) { - err = -EPERM; - goto errout; - } + if ((ops->flags & GENL_ADMIN_PERM) && + security_netlink_recv(skb, CAP_NET_ADMIN)) + return -EPERM; if (nlh->nlmsg_flags & NLM_F_DUMP) { - if (ops->dumpit == NULL) { - err = -EOPNOTSUPP; - goto errout; - } + if (ops->dumpit == NULL) + return -EOPNOTSUPP; - *errp = err = netlink_dump_start(genl_sock, skb, nlh, - ops->dumpit, ops->done); - if (err == 0) - skb_pull(skb, min(NLMSG_ALIGN(nlh->nlmsg_len), - skb->len)); - return -1; + return netlink_dump_start(genl_sock, skb, nlh, + ops->dumpit, ops->done); } - if (ops->doit == NULL) { - err = -EOPNOTSUPP; - goto errout; - } + if (ops->doit == NULL) + return -EOPNOTSUPP; if (family->attrbuf) { err = nlmsg_parse(nlh, hdrlen, family->attrbuf, family->maxattr, ops->policy); if (err < 0) - goto errout; + return err; } info.snd_seq = nlh->nlmsg_seq; @@ -364,15 +344,7 @@ static int genl_rcv_msg(struct sk_buff * info.userhdr = nlmsg_data(nlh) + GENL_HDRLEN; info.attrs = family->attrbuf; - *errp = err = ops->doit(skb, &info); - return err; - -ignore: - return 0; - -errout: - *errp = err; - return -1; + return ops->doit(skb, &info); } static void genl_rcv(struct sock *sk, int len) @@ -586,7 +558,7 @@ static int __init genl_init(void) netlink_set_nonroot(NETLINK_GENERIC, NL_NONROOT_RECV); genl_sock = netlink_kernel_create(NETLINK_GENERIC, GENL_MAX_ID, - genl_rcv, THIS_MODULE); + genl_rcv, NULL, THIS_MODULE); if (genl_sock == NULL) panic("GENL: Cannot initialize generic netlink\n"); diff -puN net/netrom/af_netrom.c~git-net net/netrom/af_netrom.c --- a/net/netrom/af_netrom.c~git-net +++ 
a/net/netrom/af_netrom.c @@ -625,42 +625,42 @@ static int nr_connect(struct socket *soc ax25_address *source = NULL; ax25_uid_assoc *user; struct net_device *dev; + int err = 0; lock_sock(sk); if (sk->sk_state == TCP_ESTABLISHED && sock->state == SS_CONNECTING) { sock->state = SS_CONNECTED; - release_sock(sk); - return 0; /* Connect completed during a ERESTARTSYS event */ + goto out_release; /* Connect completed during a ERESTARTSYS event */ } if (sk->sk_state == TCP_CLOSE && sock->state == SS_CONNECTING) { sock->state = SS_UNCONNECTED; - release_sock(sk); - return -ECONNREFUSED; + err = -ECONNREFUSED; + goto out_release; } if (sk->sk_state == TCP_ESTABLISHED) { - release_sock(sk); - return -EISCONN; /* No reconnect on a seqpacket socket */ + err = -EISCONN; /* No reconnect on a seqpacket socket */ + goto out_release; } sk->sk_state = TCP_CLOSE; sock->state = SS_UNCONNECTED; if (addr_len != sizeof(struct sockaddr_ax25) && addr_len != sizeof(struct full_sockaddr_ax25)) { - release_sock(sk); - return -EINVAL; + err = -EINVAL; + goto out_release; } if (addr->sax25_family != AF_NETROM) { - release_sock(sk); - return -EINVAL; + err = -EINVAL; + goto out_release; } if (sock_flag(sk, SOCK_ZAPPED)) { /* Must bind first - autobinding in this may or may not work */ sock_reset_flag(sk, SOCK_ZAPPED); if ((dev = nr_dev_first()) == NULL) { - release_sock(sk); - return -ENETUNREACH; + err = -ENETUNREACH; + goto out_release; } source = (ax25_address *)dev->dev_addr; @@ -671,8 +671,8 @@ static int nr_connect(struct socket *soc } else { if (ax25_uid_policy && !capable(CAP_NET_ADMIN)) { dev_put(dev); - release_sock(sk); - return -EPERM; + err = -EPERM; + goto out_release; } nr->user_addr = *source; } @@ -707,8 +707,8 @@ static int nr_connect(struct socket *soc /* Now the loop */ if (sk->sk_state != TCP_ESTABLISHED && (flags & O_NONBLOCK)) { - release_sock(sk); - return -EINPROGRESS; + err = -EINPROGRESS; + goto out_release; } /* @@ -716,46 +716,46 @@ static int nr_connect(struct socket *soc * closed. 
*/ if (sk->sk_state == TCP_SYN_SENT) { - struct task_struct *tsk = current; - DECLARE_WAITQUEUE(wait, tsk); + DEFINE_WAIT(wait); - add_wait_queue(sk->sk_sleep, &wait); for (;;) { - set_current_state(TASK_INTERRUPTIBLE); + prepare_to_wait(sk->sk_sleep, &wait, + TASK_INTERRUPTIBLE); if (sk->sk_state != TCP_SYN_SENT) break; - release_sock(sk); - if (!signal_pending(tsk)) { + if (!signal_pending(current)) { + release_sock(sk); schedule(); lock_sock(sk); continue; } - current->state = TASK_RUNNING; - remove_wait_queue(sk->sk_sleep, &wait); - return -ERESTARTSYS; + err = -ERESTARTSYS; + break; } - current->state = TASK_RUNNING; - remove_wait_queue(sk->sk_sleep, &wait); + finish_wait(sk->sk_sleep, &wait); + if (err) + goto out_release; } if (sk->sk_state != TCP_ESTABLISHED) { sock->state = SS_UNCONNECTED; - release_sock(sk); - return sock_error(sk); /* Always set at this point */ + err = sock_error(sk); /* Always set at this point */ + goto out_release; } sock->state = SS_CONNECTED; + +out_release: release_sock(sk); - return 0; + return err; } static int nr_accept(struct socket *sock, struct socket *newsock, int flags) { - struct task_struct *tsk = current; - DECLARE_WAITQUEUE(wait, tsk); struct sk_buff *skb; struct sock *newsk; + DEFINE_WAIT(wait); struct sock *sk; int err = 0; @@ -765,42 +765,40 @@ static int nr_accept(struct socket *sock lock_sock(sk); if (sk->sk_type != SOCK_SEQPACKET) { err = -EOPNOTSUPP; - goto out; + goto out_release; } if (sk->sk_state != TCP_LISTEN) { err = -EINVAL; - goto out; + goto out_release; } /* * The write queue this time is holding sockets ready to use * hooked into the SABM we saved */ - add_wait_queue(sk->sk_sleep, &wait); for (;;) { + prepare_to_wait(sk->sk_sleep, &wait, TASK_INTERRUPTIBLE); skb = skb_dequeue(&sk->sk_receive_queue); if (skb) break; - current->state = TASK_INTERRUPTIBLE; - release_sock(sk); if (flags & O_NONBLOCK) { - current->state = TASK_RUNNING; - remove_wait_queue(sk->sk_sleep, &wait); - return -EWOULDBLOCK; + err = -EWOULDBLOCK; + break; } - if (!signal_pending(tsk)) { + if (!signal_pending(current)) { + release_sock(sk); schedule(); lock_sock(sk); continue; } - current->state = TASK_RUNNING; - remove_wait_queue(sk->sk_sleep, &wait); - return -ERESTARTSYS; + err = -ERESTARTSYS; + break; } - current->state = TASK_RUNNING; - remove_wait_queue(sk->sk_sleep, &wait); + finish_wait(sk->sk_sleep, &wait); + if (err) + goto out_release; newsk = skb->sk; newsk->sk_socket = newsock; @@ -811,8 +809,9 @@ static int nr_accept(struct socket *sock sk_acceptq_removed(sk); newsock->sk = newsk; -out: +out_release: release_sock(sk); + return err; } @@ -878,7 +877,7 @@ int nr_rx_frame(struct sk_buff *skb, str if (frametype == NR_PROTOEXT && circuit_index == NR_PROTO_IP && circuit_id == NR_PROTO_IP) { skb_pull(skb, NR_NETWORK_LEN + NR_TRANSPORT_LEN); - skb->h.raw = skb->data; + skb_reset_transport_header(skb); return nr_rx_ip(skb, dev); } @@ -904,7 +903,7 @@ int nr_rx_frame(struct sk_buff *skb, str } if (sk != NULL) { - skb->h.raw = skb->data; + skb_reset_transport_header(skb); if (frametype == NR_CONNACK && skb->len == 22) nr_sk(sk)->bpqext = 1; @@ -1074,6 +1073,7 @@ static int nr_sendmsg(struct kiocb *iocb goto out; skb_reserve(skb, size - len); + skb_reset_transport_header(skb); /* * Push down the NET/ROM header @@ -1094,14 +1094,12 @@ static int nr_sendmsg(struct kiocb *iocb /* * Put the data on the end */ + skb_put(skb, len); - skb->h.raw = skb_put(skb, len); - - asmptr = skb->h.raw; SOCK_DEBUG(sk, "NET/ROM: Appending user data\n"); /* User data follows 
immediately after the NET/ROM transport header */ - if (memcpy_fromiovec(asmptr, msg->msg_iov, len)) { + if (memcpy_fromiovec(skb_transport_header(skb), msg->msg_iov, len)) { kfree_skb(skb); err = -EFAULT; goto out; @@ -1149,7 +1147,7 @@ static int nr_recvmsg(struct kiocb *iocb return er; } - skb->h.raw = skb->data; + skb_reset_transport_header(skb); copied = skb->len; if (copied > size) { @@ -1161,7 +1159,8 @@ static int nr_recvmsg(struct kiocb *iocb if (sax != NULL) { sax->sax25_family = AF_NETROM; - memcpy(sax->sax25_call.ax25_call, skb->data + 7, AX25_ADDR_LEN); + skb_copy_from_linear_data_offset(skb, 7, sax->sax25_call.ax25_call, + AX25_ADDR_LEN); } msg->msg_namelen = sizeof(*sax); @@ -1209,6 +1208,12 @@ static int nr_ioctl(struct socket *sock, release_sock(sk); return ret; + case SIOCGSTAMPNS: + lock_sock(sk); + ret = sock_get_timestampns(sk, argp); + release_sock(sk); + return ret; + case SIOCGIFADDR: case SIOCSIFADDR: case SIOCGIFDSTADDR: diff -puN net/netrom/nr_dev.c~git-net net/netrom/nr_dev.c --- a/net/netrom/nr_dev.c~git-net +++ a/net/netrom/nr_dev.c @@ -56,8 +56,8 @@ int nr_rx_ip(struct sk_buff *skb, struct /* Spoof incoming device */ skb->dev = dev; - skb->mac.raw = skb->nh.raw; - skb->nh.raw = skb->data; + skb_reset_mac_header(skb); + skb_reset_network_header(skb); skb->pkt_type = PACKET_HOST; netif_rx(skb); diff -puN net/netrom/nr_in.c~git-net net/netrom/nr_in.c --- a/net/netrom/nr_in.c~git-net +++ a/net/netrom/nr_in.c @@ -51,10 +51,12 @@ static int nr_queue_rx_frame(struct sock if ((skbn = alloc_skb(nr->fraglen, GFP_ATOMIC)) == NULL) return 1; - skbn->h.raw = skbn->data; + skb_reset_transport_header(skbn); while ((skbo = skb_dequeue(&nr->frag_queue)) != NULL) { - memcpy(skb_put(skbn, skbo->len), skbo->data, skbo->len); + skb_copy_from_linear_data(skbo, + skb_put(skbn, skbo->len), + skbo->len); kfree_skb(skbo); } diff -puN net/netrom/nr_loopback.c~git-net net/netrom/nr_loopback.c --- a/net/netrom/nr_loopback.c~git-net +++ a/net/netrom/nr_loopback.c @@ -34,8 +34,8 @@ int nr_loopback_queue(struct sk_buff *sk struct sk_buff *skbn; if ((skbn = alloc_skb(skb->len, GFP_ATOMIC)) != NULL) { - memcpy(skb_put(skbn, skb->len), skb->data, skb->len); - skbn->h.raw = skbn->data; + skb_copy_from_linear_data(skb, skb_put(skbn, skb->len), skb->len); + skb_reset_transport_header(skbn); skb_queue_tail(&loopback_queue, skbn); diff -puN net/netrom/nr_out.c~git-net net/netrom/nr_out.c --- a/net/netrom/nr_out.c~git-net +++ a/net/netrom/nr_out.c @@ -40,7 +40,7 @@ void nr_output(struct sock *sk, struct s if (skb->len - NR_TRANSPORT_LEN > NR_MAX_PACKET_SIZE) { /* Save a copy of the Transport Header */ - memcpy(transport, skb->data, NR_TRANSPORT_LEN); + skb_copy_from_linear_data(skb, transport, NR_TRANSPORT_LEN); skb_pull(skb, NR_TRANSPORT_LEN); frontlen = skb_headroom(skb); @@ -54,13 +54,13 @@ void nr_output(struct sock *sk, struct s len = (NR_MAX_PACKET_SIZE > skb->len) ? 
skb->len : NR_MAX_PACKET_SIZE; /* Copy the user data */ - memcpy(skb_put(skbn, len), skb->data, len); + skb_copy_from_linear_data(skb, skb_put(skbn, len), len); skb_pull(skb, len); /* Duplicate the Transport Header */ skb_push(skbn, NR_TRANSPORT_LEN); - memcpy(skbn->data, transport, NR_TRANSPORT_LEN); - + skb_copy_to_linear_data(skbn, transport, + NR_TRANSPORT_LEN); if (skb->len > 0) skbn->data[4] |= NR_MORE_FLAG; diff -puN net/netrom/nr_subr.c~git-net net/netrom/nr_subr.c --- a/net/netrom/nr_subr.c~git-net +++ a/net/netrom/nr_subr.c @@ -226,13 +226,13 @@ void __nr_transmit_reply(struct sk_buff dptr = skb_put(skbn, NR_NETWORK_LEN + NR_TRANSPORT_LEN); - memcpy(dptr, skb->data + 7, AX25_ADDR_LEN); + skb_copy_from_linear_data_offset(skb, 7, dptr, AX25_ADDR_LEN); dptr[6] &= ~AX25_CBIT; dptr[6] &= ~AX25_EBIT; dptr[6] |= AX25_SSSID_SPARE; dptr += AX25_ADDR_LEN; - memcpy(dptr, skb->data + 0, AX25_ADDR_LEN); + skb_copy_from_linear_data(skb, dptr, AX25_ADDR_LEN); dptr[6] &= ~AX25_CBIT; dptr[6] |= AX25_EBIT; dptr[6] |= AX25_SSSID_SPARE; diff -puN net/packet/af_packet.c~git-net net/packet/af_packet.c --- a/net/packet/af_packet.c~git-net +++ a/net/packet/af_packet.c @@ -114,22 +114,22 @@ On receive: ----------- Incoming, dev->hard_header!=NULL - mac.raw -> ll header - data -> data + mac_header -> ll header + data -> data Outgoing, dev->hard_header!=NULL - mac.raw -> ll header - data -> ll header + mac_header -> ll header + data -> ll header Incoming, dev->hard_header==NULL - mac.raw -> UNKNOWN position. It is very likely, that it points to ll header. - PPP makes it, that is wrong, because introduce assymetry - between rx and tx paths. - data -> data + mac_header -> UNKNOWN position. It is very likely, that it points to ll + header. PPP makes it, that is wrong, because introduce + assymetry between rx and tx paths. + data -> data Outgoing, dev->hard_header==NULL - mac.raw -> data. ll header is still not built! - data -> data + mac_header -> data. ll header is still not built! + data -> data Resume If dev->hard_header==NULL we are unlikely to restore sensible ll header. @@ -139,12 +139,12 @@ On transmit: ------------ dev->hard_header != NULL - mac.raw -> ll header - data -> ll header + mac_header -> ll header + data -> ll header dev->hard_header == NULL (ll header is added by device, we cannot control it) - mac.raw -> data - data -> data + mac_header -> data + data -> data We should set nh.raw on output to correct posistion, packet classifier depends on it. @@ -201,7 +201,8 @@ struct packet_sock { struct packet_type prot_hook; spinlock_t bind_lock; unsigned int running:1, /* prot_hook is attached*/ - auxdata:1; + auxdata:1, + origdev:1; int ifindex; /* bound device */ __be16 num; #ifdef CONFIG_PACKET_MULTICAST @@ -284,7 +285,7 @@ static int packet_rcv_spkt(struct sk_buf * Incoming packets have ll header pulled, * push it back. * - * For outgoing ones skb->data == skb->mac.raw + * For outgoing ones skb->data == skb_mac_header(skb) * so that this procedure is noop. */ @@ -303,7 +304,7 @@ static int packet_rcv_spkt(struct sk_buf spkt = &PACKET_SKB_CB(skb)->sa.pkt; - skb_push(skb, skb->data-skb->mac.raw); + skb_push(skb, skb->data - skb_mac_header(skb)); /* * The SOCK_PACKET socket receives _all_ frames. @@ -401,14 +402,14 @@ static int packet_sendmsg_spkt(struct ki * notable one here. This should really be fixed at the driver level. 
*/ skb_reserve(skb, LL_RESERVED_SPACE(dev)); - skb->nh.raw = skb->data; + skb_reset_network_header(skb); /* Try to align data part correctly */ if (dev->hard_header) { skb->data -= dev->hard_header_len; skb->tail -= dev->hard_header_len; if (len < dev->hard_header_len) - skb->nh.raw = skb->data; + skb_reset_network_header(skb); } /* Returns -EFAULT on error */ @@ -488,10 +489,10 @@ static int packet_rcv(struct sk_buff *sk never delivered to user. */ if (sk->sk_type != SOCK_DGRAM) - skb_push(skb, skb->data - skb->mac.raw); + skb_push(skb, skb->data - skb_mac_header(skb)); else if (skb->pkt_type == PACKET_OUTGOING) { /* Special case: outgoing packets have ll header at head */ - skb_pull(skb, skb->nh.raw - skb->data); + skb_pull(skb, skb_network_offset(skb)); } } @@ -528,7 +529,10 @@ static int packet_rcv(struct sk_buff *sk sll->sll_hatype = dev->type; sll->sll_protocol = skb->protocol; sll->sll_pkttype = skb->pkt_type; - sll->sll_ifindex = dev->ifindex; + if (unlikely(po->origdev) && skb->pkt_type == PACKET_HOST) + sll->sll_ifindex = orig_dev->ifindex; + else + sll->sll_ifindex = dev->ifindex; sll->sll_halen = 0; if (dev->hard_header_parse) @@ -582,6 +586,7 @@ static int tpacket_rcv(struct sk_buff *s unsigned long status = TP_STATUS_LOSING|TP_STATUS_USER; unsigned short macoff, netoff; struct sk_buff *copy_skb = NULL; + struct timeval tv; if (skb->pkt_type == PACKET_LOOPBACK) goto drop; @@ -591,10 +596,10 @@ static int tpacket_rcv(struct sk_buff *s if (dev->hard_header) { if (sk->sk_type != SOCK_DGRAM) - skb_push(skb, skb->data - skb->mac.raw); + skb_push(skb, skb->data - skb_mac_header(skb)); else if (skb->pkt_type == PACKET_OUTGOING) { /* Special case: outgoing packets have ll header at head */ - skb_pull(skb, skb->nh.raw - skb->data); + skb_pull(skb, skb_network_offset(skb)); } } @@ -612,7 +617,7 @@ static int tpacket_rcv(struct sk_buff *s if (sk->sk_type == SOCK_DGRAM) { macoff = netoff = TPACKET_ALIGN(TPACKET_HDRLEN) + 16; } else { - unsigned maclen = skb->nh.raw - skb->data; + unsigned maclen = skb_network_offset(skb); netoff = TPACKET_ALIGN(TPACKET_HDRLEN + (maclen < 16 ? 
16 : maclen)); macoff = netoff - maclen; } @@ -656,12 +661,13 @@ static int tpacket_rcv(struct sk_buff *s h->tp_snaplen = snaplen; h->tp_mac = macoff; h->tp_net = netoff; - if (skb->tstamp.off_sec == 0) { + if (skb->tstamp.tv64 == 0) { __net_timestamp(skb); sock_enable_timestamp(sk); } - h->tp_sec = skb->tstamp.off_sec; - h->tp_usec = skb->tstamp.off_usec; + tv = ktime_to_timeval(skb->tstamp); + h->tp_sec = tv.tv_sec; + h->tp_usec = tv.tv_usec; sll = (struct sockaddr_ll*)((u8*)h + TPACKET_ALIGN(sizeof(*h))); sll->sll_halen = 0; @@ -671,7 +677,10 @@ static int tpacket_rcv(struct sk_buff *s sll->sll_hatype = dev->type; sll->sll_protocol = skb->protocol; sll->sll_pkttype = skb->pkt_type; - sll->sll_ifindex = dev->ifindex; + if (unlikely(po->origdev) && skb->pkt_type == PACKET_HOST) + sll->sll_ifindex = orig_dev->ifindex; + else + sll->sll_ifindex = dev->ifindex; h->tp_status = status; smp_mb(); @@ -766,14 +775,14 @@ static int packet_sendmsg(struct kiocb * goto out_unlock; skb_reserve(skb, LL_RESERVED_SPACE(dev)); - skb->nh.raw = skb->data; + skb_reset_network_header(skb); if (dev->hard_header) { int res; err = -EINVAL; res = dev->hard_header(skb, dev, ntohs(proto), addr, NULL, len); if (sock->type != SOCK_DGRAM) { - skb->tail = skb->data; + skb_reset_tail_pointer(skb); skb->len = 0; } else if (res < 0) goto out_free; @@ -1143,7 +1152,7 @@ static int packet_recvmsg(struct kiocb * aux.tp_len = PACKET_SKB_CB(skb)->origlen; aux.tp_snaplen = skb->len; aux.tp_mac = 0; - aux.tp_net = skb->nh.raw - skb->data; + aux.tp_net = skb_network_offset(skb); put_cmsg(msg, SOL_PACKET, PACKET_AUXDATA, sizeof(aux), &aux); } @@ -1411,6 +1420,18 @@ packet_setsockopt(struct socket *sock, i po->auxdata = !!val; return 0; } + case PACKET_ORIGDEV: + { + int val; + + if (optlen < sizeof(val)) + return -EINVAL; + if (copy_from_user(&val, optval, sizeof(val))) + return -EFAULT; + + po->origdev = !!val; + return 0; + } default: return -ENOPROTOOPT; } @@ -1454,6 +1475,13 @@ static int packet_getsockopt(struct sock data = &val; break; + case PACKET_ORIGDEV: + if (len > sizeof(int)) + len = sizeof(int); + val = po->origdev; + + data = &val; + break; default: return -ENOPROTOOPT; } @@ -1543,6 +1571,8 @@ static int packet_ioctl(struct socket *s } case SIOCGSTAMP: return sock_get_timestamp(sk, (struct timeval __user *)arg); + case SIOCGSTAMPNS: + return sock_get_timestampns(sk, (struct timespec __user *)arg); #ifdef CONFIG_INET case SIOCADDRT: diff -puN net/rose/af_rose.c~git-net net/rose/af_rose.c --- a/net/rose/af_rose.c~git-net +++ a/net/rose/af_rose.c @@ -812,26 +812,26 @@ rose_try_next_neigh: * closed. 
*/ if (sk->sk_state == TCP_SYN_SENT) { - struct task_struct *tsk = current; - DECLARE_WAITQUEUE(wait, tsk); + DEFINE_WAIT(wait); - add_wait_queue(sk->sk_sleep, &wait); for (;;) { - set_current_state(TASK_INTERRUPTIBLE); + prepare_to_wait(sk->sk_sleep, &wait, + TASK_INTERRUPTIBLE); if (sk->sk_state != TCP_SYN_SENT) break; - release_sock(sk); - if (!signal_pending(tsk)) { + if (!signal_pending(current)) { + release_sock(sk); schedule(); lock_sock(sk); continue; } - current->state = TASK_RUNNING; - remove_wait_queue(sk->sk_sleep, &wait); - return -ERESTARTSYS; + err = -ERESTARTSYS; + break; } - current->state = TASK_RUNNING; - remove_wait_queue(sk->sk_sleep, &wait); + finish_wait(sk->sk_sleep, &wait); + + if (err) + goto out_release; } if (sk->sk_state != TCP_ESTABLISHED) { @@ -856,10 +856,9 @@ out_release: static int rose_accept(struct socket *sock, struct socket *newsock, int flags) { - struct task_struct *tsk = current; - DECLARE_WAITQUEUE(wait, tsk); struct sk_buff *skb; struct sock *newsk; + DEFINE_WAIT(wait); struct sock *sk; int err = 0; @@ -869,42 +868,41 @@ static int rose_accept(struct socket *so lock_sock(sk); if (sk->sk_type != SOCK_SEQPACKET) { err = -EOPNOTSUPP; - goto out; + goto out_release; } if (sk->sk_state != TCP_LISTEN) { err = -EINVAL; - goto out; + goto out_release; } /* * The write queue this time is holding sockets ready to use * hooked into the SABM we saved */ - add_wait_queue(sk->sk_sleep, &wait); for (;;) { + prepare_to_wait(sk->sk_sleep, &wait, TASK_INTERRUPTIBLE); + skb = skb_dequeue(&sk->sk_receive_queue); if (skb) break; - current->state = TASK_INTERRUPTIBLE; - release_sock(sk); if (flags & O_NONBLOCK) { - current->state = TASK_RUNNING; - remove_wait_queue(sk->sk_sleep, &wait); - return -EWOULDBLOCK; + err = -EWOULDBLOCK; + break; } - if (!signal_pending(tsk)) { + if (!signal_pending(current)) { + release_sock(sk); schedule(); lock_sock(sk); continue; } - current->state = TASK_RUNNING; - remove_wait_queue(sk->sk_sleep, &wait); - return -ERESTARTSYS; + err = -ERESTARTSYS; + break; } - current->state = TASK_RUNNING; - remove_wait_queue(sk->sk_sleep, &wait); + finish_wait(sk->sk_sleep, &wait); + if (err) + goto out_release; newsk = skb->sk; newsk->sk_socket = newsock; @@ -916,7 +914,7 @@ static int rose_accept(struct socket *so sk->sk_ack_backlog--; newsock->sk = newsk; -out: +out_release: release_sock(sk); return err; @@ -1105,9 +1103,10 @@ static int rose_sendmsg(struct kiocb *io */ SOCK_DEBUG(sk, "ROSE: Appending user data\n"); - asmptr = skb->h.raw = skb_put(skb, len); + skb_reset_transport_header(skb); + skb_put(skb, len); - err = memcpy_fromiovec(asmptr, msg->msg_iov, len); + err = memcpy_fromiovec(skb_transport_header(skb), msg->msg_iov, len); if (err) { kfree_skb(skb); return err; @@ -1155,7 +1154,7 @@ static int rose_sendmsg(struct kiocb *io int lg; /* Save a copy of the Header */ - memcpy(header, skb->data, ROSE_MIN_LEN); + skb_copy_from_linear_data(skb, header, ROSE_MIN_LEN); skb_pull(skb, ROSE_MIN_LEN); frontlen = skb_headroom(skb); @@ -1175,12 +1174,12 @@ static int rose_sendmsg(struct kiocb *io lg = (ROSE_PACLEN > skb->len) ? 
skb->len : ROSE_PACLEN; /* Copy the user data */ - memcpy(skb_put(skbn, lg), skb->data, lg); + skb_copy_from_linear_data(skb, skb_put(skbn, lg), lg); skb_pull(skb, lg); /* Duplicate the Header */ skb_push(skbn, ROSE_MIN_LEN); - memcpy(skbn->data, header, ROSE_MIN_LEN); + skb_copy_to_linear_data(skbn, header, ROSE_MIN_LEN); if (skb->len > 0) skbn->data[2] |= M_BIT; @@ -1234,7 +1233,7 @@ static int rose_recvmsg(struct kiocb *io *asmptr = qbit; } - skb->h.raw = skb->data; + skb_reset_transport_header(skb); copied = skb->len; if (copied > size) { @@ -1296,6 +1295,9 @@ static int rose_ioctl(struct socket *soc case SIOCGSTAMP: return sock_get_timestamp(sk, (struct timeval __user *) argp); + case SIOCGSTAMPNS: + return sock_get_timestampns(sk, (struct timespec __user *) argp); + case SIOCGIFADDR: case SIOCSIFADDR: case SIOCGIFDSTADDR: diff -puN net/rose/rose_loopback.c~git-net net/rose/rose_loopback.c --- a/net/rose/rose_loopback.c~git-net +++ a/net/rose/rose_loopback.c @@ -77,7 +77,7 @@ static void rose_loopback_timer(unsigned dest = (rose_address *)(skb->data + 4); lci_o = 0xFFF - lci_i; - skb->h.raw = skb->data; + skb_reset_transport_header(skb); sk = rose_find_socket(lci_o, &rose_loopback_neigh); if (sk) { diff -puN net/rose/rose_route.c~git-net net/rose/rose_route.c --- a/net/rose/rose_route.c~git-net +++ a/net/rose/rose_route.c @@ -906,7 +906,7 @@ int rose_route_frame(struct sk_buff *skb } } else { - skb->h.raw = skb->data; + skb_reset_transport_header(skb); res = rose_process_rx_frame(sk, skb); goto out; } diff -puN net/rxrpc/connection.c~git-net net/rxrpc/connection.c --- a/net/rxrpc/connection.c~git-net +++ a/net/rxrpc/connection.c @@ -229,10 +229,10 @@ int rxrpc_connection_lookup(struct rxrpc _enter("%p{{%hu}},%u,%hu", peer, peer->trans->port, - ntohs(pkt->h.uh->source), + ntohs(udp_hdr(pkt)->source), ntohs(msg->hdr.serviceId)); - x_port = pkt->h.uh->source; + x_port = udp_hdr(pkt)->source; x_epoch = msg->hdr.epoch; x_clflag = msg->hdr.flags & RXRPC_CLIENT_INITIATED; x_connid = htonl(ntohl(msg->hdr.cid) & RXRPC_CIDMASK); @@ -267,7 +267,7 @@ int rxrpc_connection_lookup(struct rxrpc /* fill in the specifics */ candidate->addr.sin_family = AF_INET; candidate->addr.sin_port = x_port; - candidate->addr.sin_addr.s_addr = pkt->nh.iph->saddr; + candidate->addr.sin_addr.s_addr = ip_hdr(pkt)->saddr; candidate->in_epoch = x_epoch; candidate->out_epoch = x_epoch; candidate->in_clientflag = RXRPC_CLIENT_INITIATED; diff -puN net/rxrpc/main.c~git-net net/rxrpc/main.c --- a/net/rxrpc/main.c~git-net +++ a/net/rxrpc/main.c @@ -37,7 +37,7 @@ static int __init rxrpc_initialise(void) int ret; /* my epoch value */ - rxrpc_epoch = htonl(xtime.tv_sec); + rxrpc_epoch = htonl(get_seconds()); /* register the /proc interface */ #ifdef CONFIG_PROC_FS diff -puN net/rxrpc/transport.c~git-net net/rxrpc/transport.c --- a/net/rxrpc/transport.c~git-net +++ a/net/rxrpc/transport.c @@ -478,8 +478,8 @@ void rxrpc_trans_receive_packet(struct r return; } - addr = pkt->nh.iph->saddr; - port = pkt->h.uh->source; + addr = ip_hdr(pkt)->saddr; + port = udp_hdr(pkt)->source; _net("Rx Received UDP packet from %08x:%04hu", ntohl(addr), ntohs(port)); @@ -625,8 +625,8 @@ int rxrpc_trans_immediate_abort(struct r memset(&sin,0,sizeof(sin)); sin.sin_family = AF_INET; - sin.sin_port = msg->pkt->h.uh->source; - sin.sin_addr.s_addr = msg->pkt->nh.iph->saddr; + sin.sin_port = udp_hdr(msg->pkt)->source; + sin.sin_addr.s_addr = ip_hdr(msg->pkt)->saddr; msghdr.msg_name = &sin; msghdr.msg_namelen = sizeof(sin); diff -puN net/sched/Kconfig~git-net 
net/sched/Kconfig --- a/net/sched/Kconfig~git-net +++ a/net/sched/Kconfig @@ -46,62 +46,6 @@ config NET_SCH_FIFO if NET_SCHED -choice - prompt "Packet scheduler clock source" - default NET_SCH_CLK_GETTIMEOFDAY - ---help--- - Packet schedulers need a monotonic clock that increments at a static - rate. The kernel provides several suitable interfaces, each with - different properties: - - - high resolution (us or better) - - fast to read (minimal locking, no i/o access) - - synchronized on all processors - - handles cpu clock frequency changes - - but nothing provides all of the above. - -config NET_SCH_CLK_JIFFIES - bool "Timer interrupt" - ---help--- - Say Y here if you want to use the timer interrupt (jiffies) as clock - source. This clock source is fast, synchronized on all processors and - handles cpu clock frequency changes, but its resolution is too low - for accurate shaping except at very low speed. - -config NET_SCH_CLK_GETTIMEOFDAY - bool "gettimeofday" - ---help--- - Say Y here if you want to use gettimeofday as clock source. This clock - source has high resolution, is synchronized on all processors and - handles cpu clock frequency changes, but it is slow. - - Choose this if you need a high resolution clock source but can't use - the CPU's cycle counter. - -# don't allow on SMP x86 because they can have unsynchronized TSCs. -# gettimeofday is a good alternative -config NET_SCH_CLK_CPU - bool "CPU cycle counter" - depends on ((X86_TSC || X86_64) && !SMP) || ALPHA || SPARC64 || PPC64 || IA64 - ---help--- - Say Y here if you want to use the CPU's cycle counter as clock source. - This is a cheap and high resolution clock source, but on some - architectures it is not synchronized on all processors and doesn't - handle cpu clock frequency changes. - - The useable cycle counters are: - - x86/x86_64 - Timestamp Counter - alpha - Cycle Counter - sparc64 - %ticks register - ppc64 - Time base - ia64 - Interval Time Counter - - Choose this if your CPU's cycle counter is working properly. 
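(Illustration, not part of the patch.) The clock-source choice above goes away because the scheduler clock is now taken from ktime; the hunks below replace PSCHED_GET_TIME()/PSCHED_TDIFF_SAFE() with psched_get_time()/psched_tdiff_bounded(). A rough sketch of what those helpers boil down to, assuming they match include/net/pkt_sched.h in this tree (details may differ):

#include <linux/types.h>
#include <linux/ktime.h>
#include <linux/kernel.h>

typedef u64 psched_time_t;
typedef s64 psched_tdiff_t;

/* one scheduler tick is 2^10 ns, i.e. 1.024us */
#define PSCHED_US2NS(x)	((s64)(x) << 10)
#define PSCHED_NS2US(x)	((x) >> 10)

static inline psched_time_t psched_get_time(void)
{
	return PSCHED_NS2US(ktime_to_ns(ktime_get()));
}

static inline psched_tdiff_t
psched_tdiff_bounded(psched_time_t tv1, psched_time_t tv2,
		     psched_time_t bound)
{
	return min(tv1 - tv2, bound);
}

With a single monotonic source there is nothing left to configure, which is why the whole Kconfig choice can be deleted.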
- -endchoice - comment "Queueing/Scheduling" config NET_SCH_CBQ diff -puN net/sched/act_api.c~git-net net/sched/act_api.c --- a/net/sched/act_api.c~git-net +++ a/net/sched/act_api.c @@ -25,12 +25,12 @@ #include #include #include -#include #include #include #include #include #include +#include void tcf_hash_destroy(struct tcf_common *p, struct tcf_hashinfo *hinfo) { @@ -93,15 +93,15 @@ static int tcf_dump_walker(struct sk_buf continue; a->priv = p; a->order = n_i; - r = (struct rtattr*) skb->tail; + r = (struct rtattr *)skb_tail_pointer(skb); RTA_PUT(skb, a->order, 0, NULL); err = tcf_action_dump_1(skb, a, 0, 0); if (err < 0) { index--; - skb_trim(skb, (u8*)r - skb->data); + nlmsg_trim(skb, r); goto done; } - r->rta_len = skb->tail - (u8*)r; + r->rta_len = skb_tail_pointer(skb) - (u8 *)r; n_i++; if (n_i >= TCA_ACT_MAX_PRIO) goto done; @@ -114,7 +114,7 @@ done: return n_i; rtattr_failure: - skb_trim(skb, (u8*)r - skb->data); + nlmsg_trim(skb, r); goto done; } @@ -125,7 +125,7 @@ static int tcf_del_walker(struct sk_buff struct rtattr *r ; int i= 0, n_i = 0; - r = (struct rtattr*) skb->tail; + r = (struct rtattr *)skb_tail_pointer(skb); RTA_PUT(skb, a->order, 0, NULL); RTA_PUT(skb, TCA_KIND, IFNAMSIZ, a->ops->kind); for (i = 0; i < (hinfo->hmask + 1); i++) { @@ -140,11 +140,11 @@ static int tcf_del_walker(struct sk_buff } } RTA_PUT(skb, TCA_FCNT, 4, &n_i); - r->rta_len = skb->tail - (u8*)r; + r->rta_len = skb_tail_pointer(skb) - (u8 *)r; return n_i; rtattr_failure: - skb_trim(skb, (u8*)r - skb->data); + nlmsg_trim(skb, r); return -EINVAL; } @@ -423,7 +423,7 @@ int tcf_action_dump_1(struct sk_buff *skb, struct tc_action *a, int bind, int ref) { int err = -EINVAL; - unsigned char *b = skb->tail; + unsigned char *b = skb_tail_pointer(skb); struct rtattr *r; if (a->ops == NULL || a->ops->dump == NULL) @@ -432,15 +432,15 @@ tcf_action_dump_1(struct sk_buff *skb, s RTA_PUT(skb, TCA_KIND, IFNAMSIZ, a->ops->kind); if (tcf_action_copy_stats(skb, a, 0)) goto rtattr_failure; - r = (struct rtattr*) skb->tail; + r = (struct rtattr *)skb_tail_pointer(skb); RTA_PUT(skb, TCA_OPTIONS, 0, NULL); if ((err = tcf_action_dump_old(skb, a, bind, ref)) > 0) { - r->rta_len = skb->tail - (u8*)r; + r->rta_len = skb_tail_pointer(skb) - (u8 *)r; return err; } rtattr_failure: - skb_trim(skb, b - skb->data); + nlmsg_trim(skb, b); return -1; } @@ -449,17 +449,17 @@ tcf_action_dump(struct sk_buff *skb, str { struct tc_action *a; int err = -EINVAL; - unsigned char *b = skb->tail; + unsigned char *b = skb_tail_pointer(skb); struct rtattr *r ; while ((a = act) != NULL) { - r = (struct rtattr*) skb->tail; + r = (struct rtattr *)skb_tail_pointer(skb); act = a->next; RTA_PUT(skb, a->order, 0, NULL); err = tcf_action_dump_1(skb, a, bind, ref); if (err < 0) goto errout; - r->rta_len = skb->tail - (u8*)r; + r->rta_len = skb_tail_pointer(skb) - (u8 *)r; } return 0; @@ -467,7 +467,7 @@ tcf_action_dump(struct sk_buff *skb, str rtattr_failure: err = -EINVAL; errout: - skb_trim(skb, b - skb->data); + nlmsg_trim(skb, b); return err; } @@ -635,7 +635,7 @@ tca_get_fill(struct sk_buff *skb, struct { struct tcamsg *t; struct nlmsghdr *nlh; - unsigned char *b = skb->tail; + unsigned char *b = skb_tail_pointer(skb); struct rtattr *x; nlh = NLMSG_NEW(skb, pid, seq, event, sizeof(*t), flags); @@ -645,20 +645,20 @@ tca_get_fill(struct sk_buff *skb, struct t->tca__pad1 = 0; t->tca__pad2 = 0; - x = (struct rtattr*) skb->tail; + x = (struct rtattr *)skb_tail_pointer(skb); RTA_PUT(skb, TCA_ACT_TAB, 0, NULL); if (tcf_action_dump(skb, a, bind, ref) < 0) 
goto rtattr_failure; - x->rta_len = skb->tail - (u8*)x; + x->rta_len = skb_tail_pointer(skb) - (u8 *)x; - nlh->nlmsg_len = skb->tail - b; + nlh->nlmsg_len = skb_tail_pointer(skb) - b; return skb->len; rtattr_failure: nlmsg_failure: - skb_trim(skb, b - skb->data); + nlmsg_trim(skb, b); return -1; } @@ -767,7 +767,7 @@ static int tca_action_flush(struct rtatt return -ENOBUFS; } - b = (unsigned char *)skb->tail; + b = skb_tail_pointer(skb); if (rtattr_parse_nested(tb, TCA_ACT_MAX, rta) < 0) goto err_out; @@ -783,16 +783,16 @@ static int tca_action_flush(struct rtatt t->tca__pad1 = 0; t->tca__pad2 = 0; - x = (struct rtattr *) skb->tail; + x = (struct rtattr *)skb_tail_pointer(skb); RTA_PUT(skb, TCA_ACT_TAB, 0, NULL); err = a->ops->walk(skb, &dcb, RTM_DELACTION, a); if (err < 0) goto rtattr_failure; - x->rta_len = skb->tail - (u8 *) x; + x->rta_len = skb_tail_pointer(skb) - (u8 *)x; - nlh->nlmsg_len = skb->tail - b; + nlh->nlmsg_len = skb_tail_pointer(skb) - b; nlh->nlmsg_flags |= NLM_F_ROOT; module_put(a->ops->owner); kfree(a); @@ -884,7 +884,7 @@ static int tcf_add_notify(struct tc_acti if (!skb) return -ENOBUFS; - b = (unsigned char *)skb->tail; + b = skb_tail_pointer(skb); nlh = NLMSG_NEW(skb, pid, seq, event, sizeof(*t), flags); t = NLMSG_DATA(nlh); @@ -892,15 +892,15 @@ static int tcf_add_notify(struct tc_acti t->tca__pad1 = 0; t->tca__pad2 = 0; - x = (struct rtattr*) skb->tail; + x = (struct rtattr *)skb_tail_pointer(skb); RTA_PUT(skb, TCA_ACT_TAB, 0, NULL); if (tcf_action_dump(skb, a, 0, 0) < 0) goto rtattr_failure; - x->rta_len = skb->tail - (u8*)x; + x->rta_len = skb_tail_pointer(skb) - (u8 *)x; - nlh->nlmsg_len = skb->tail - b; + nlh->nlmsg_len = skb_tail_pointer(skb) - b; NETLINK_CB(skb).dst_group = RTNLGRP_TC; err = rtnetlink_send(skb, pid, RTNLGRP_TC, flags&NLM_F_ECHO); @@ -1015,7 +1015,7 @@ static int tc_dump_action(struct sk_buff *skb, struct netlink_callback *cb) { struct nlmsghdr *nlh; - unsigned char *b = skb->tail; + unsigned char *b = skb_tail_pointer(skb); struct rtattr *x; struct tc_action_ops *a_o; struct tc_action a; @@ -1048,7 +1048,7 @@ tc_dump_action(struct sk_buff *skb, stru t->tca__pad1 = 0; t->tca__pad2 = 0; - x = (struct rtattr *) skb->tail; + x = (struct rtattr *)skb_tail_pointer(skb); RTA_PUT(skb, TCA_ACT_TAB, 0, NULL); ret = a_o->walk(skb, cb, RTM_GETACTION, &a); @@ -1056,12 +1056,12 @@ tc_dump_action(struct sk_buff *skb, stru goto rtattr_failure; if (ret > 0) { - x->rta_len = skb->tail - (u8 *) x; + x->rta_len = skb_tail_pointer(skb) - (u8 *)x; ret = skb->len; } else - skb_trim(skb, (u8*)x - skb->data); + nlmsg_trim(skb, x); - nlh->nlmsg_len = skb->tail - b; + nlh->nlmsg_len = skb_tail_pointer(skb) - b; if (NETLINK_CB(cb->skb).pid && ret) nlh->nlmsg_flags |= NLM_F_MULTI; module_put(a_o->owner); @@ -1070,20 +1070,15 @@ tc_dump_action(struct sk_buff *skb, stru rtattr_failure: nlmsg_failure: module_put(a_o->owner); - skb_trim(skb, b - skb->data); + nlmsg_trim(skb, b); return skb->len; } static int __init tc_action_init(void) { - struct rtnetlink_link *link_p = rtnetlink_links[PF_UNSPEC]; - - if (link_p) { - link_p[RTM_NEWACTION-RTM_BASE].doit = tc_ctl_action; - link_p[RTM_DELACTION-RTM_BASE].doit = tc_ctl_action; - link_p[RTM_GETACTION-RTM_BASE].doit = tc_ctl_action; - link_p[RTM_GETACTION-RTM_BASE].dumpit = tc_dump_action; - } + rtnl_register(PF_UNSPEC, RTM_NEWACTION, tc_ctl_action, NULL); + rtnl_register(PF_UNSPEC, RTM_DELACTION, tc_ctl_action, NULL); + rtnl_register(PF_UNSPEC, RTM_GETACTION, tc_ctl_action, tc_dump_action); return 0; } diff -puN 
net/sched/act_gact.c~git-net net/sched/act_gact.c --- a/net/sched/act_gact.c~git-net +++ a/net/sched/act_gact.c @@ -28,6 +28,7 @@ #include #include #include +#include #include #include #include @@ -155,7 +156,7 @@ static int tcf_gact(struct sk_buff *skb, static int tcf_gact_dump(struct sk_buff *skb, struct tc_action *a, int bind, int ref) { - unsigned char *b = skb->tail; + unsigned char *b = skb_tail_pointer(skb); struct tc_gact opt; struct tcf_gact *gact = a->priv; struct tcf_t t; @@ -181,7 +182,7 @@ static int tcf_gact_dump(struct sk_buff return skb->len; rtattr_failure: - skb_trim(skb, b - skb->data); + nlmsg_trim(skb, b); return -1; } diff -puN net/sched/act_ipt.c~git-net net/sched/act_ipt.c --- a/net/sched/act_ipt.c~git-net +++ a/net/sched/act_ipt.c @@ -30,6 +30,7 @@ #include #include #include +#include #include #include #include @@ -245,7 +246,7 @@ static int tcf_ipt(struct sk_buff *skb, static int tcf_ipt_dump(struct sk_buff *skb, struct tc_action *a, int bind, int ref) { - unsigned char *b = skb->tail; + unsigned char *b = skb_tail_pointer(skb); struct tcf_ipt *ipt = a->priv; struct ipt_entry_target *t; struct tcf_t tm; @@ -277,7 +278,7 @@ static int tcf_ipt_dump(struct sk_buff * return skb->len; rtattr_failure: - skb_trim(skb, b - skb->data); + nlmsg_trim(skb, b); kfree(t); return -1; } diff -puN net/sched/act_mirred.c~git-net net/sched/act_mirred.c --- a/net/sched/act_mirred.c~git-net +++ a/net/sched/act_mirred.c @@ -30,6 +30,7 @@ #include #include #include +#include #include #include #include @@ -206,7 +207,7 @@ bad_mirred: static int tcf_mirred_dump(struct sk_buff *skb, struct tc_action *a, int bind, int ref) { - unsigned char *b = skb->tail; + unsigned char *b = skb_tail_pointer(skb); struct tcf_mirred *m = a->priv; struct tc_mirred opt; struct tcf_t t; @@ -225,7 +226,7 @@ static int tcf_mirred_dump(struct sk_buf return skb->len; rtattr_failure: - skb_trim(skb, b - skb->data); + nlmsg_trim(skb, b); return -1; } diff -puN net/sched/act_pedit.c~git-net net/sched/act_pedit.c --- a/net/sched/act_pedit.c~git-net +++ a/net/sched/act_pedit.c @@ -27,6 +27,7 @@ #include #include #include +#include #include #include #include @@ -136,7 +137,7 @@ static int tcf_pedit(struct sk_buff *skb } } - pptr = skb->nh.raw; + pptr = skb_network_header(skb); spin_lock(&p->tcf_lock); @@ -195,7 +196,7 @@ done: static int tcf_pedit_dump(struct sk_buff *skb, struct tc_action *a, int bind, int ref) { - unsigned char *b = skb->tail; + unsigned char *b = skb_tail_pointer(skb); struct tcf_pedit *p = a->priv; struct tc_pedit *opt; struct tcf_t t; @@ -226,7 +227,7 @@ static int tcf_pedit_dump(struct sk_buff return skb->len; rtattr_failure: - skb_trim(skb, b - skb->data); + nlmsg_trim(skb, b); kfree(opt); return -1; } diff -puN net/sched/act_police.c~git-net net/sched/act_police.c --- a/net/sched/act_police.c~git-net +++ a/net/sched/act_police.c @@ -30,6 +30,7 @@ #include #include #include +#include #define L2T(p,L) ((p)->tcfp_R_tab->data[(L)>>(p)->tcfp_R_tab->rate.cell_log]) #define L2T_P(p,L) ((p)->tcfp_P_tab->data[(L)>>(p)->tcfp_P_tab->rate.cell_log]) @@ -80,7 +81,7 @@ static int tcf_act_police_walker(struct continue; a->priv = p; a->order = index; - r = (struct rtattr*) skb->tail; + r = (struct rtattr *)skb_tail_pointer(skb); RTA_PUT(skb, a->order, 0, NULL); if (type == RTM_DELACTION) err = tcf_action_dump_1(skb, a, 0, 1); @@ -88,10 +89,10 @@ static int tcf_act_police_walker(struct err = tcf_action_dump_1(skb, a, 0, 0); if (err < 0) { index--; - skb_trim(skb, (u8*)r - skb->data); + nlmsg_trim(skb, r); goto 
done; } - r->rta_len = skb->tail - (u8*)r; + r->rta_len = skb_tail_pointer(skb) - (u8 *)r; n_i++; } } @@ -102,7 +103,7 @@ done: return n_i; rtattr_failure: - skb_trim(skb, (u8*)r - skb->data); + nlmsg_trim(skb, r); goto done; } #endif @@ -240,7 +241,7 @@ override: if (ret != ACT_P_CREATED) return ret; - PSCHED_GET_TIME(police->tcfp_t_c); + police->tcfp_t_c = psched_get_time(); police->tcf_index = parm->index ? parm->index : tcf_hash_new_index(&police_idx_gen, &police_hash_info); h = tcf_hash(police->tcf_index, POL_TAB_MASK); @@ -295,10 +296,9 @@ static int tcf_act_police(struct sk_buff return police->tcfp_result; } - PSCHED_GET_TIME(now); - - toks = PSCHED_TDIFF_SAFE(now, police->tcfp_t_c, - police->tcfp_burst); + now = psched_get_time(); + toks = psched_tdiff_bounded(now, police->tcfp_t_c, + police->tcfp_burst); if (police->tcfp_P_tab) { ptoks = toks + police->tcfp_ptoks; if (ptoks > (long)L2T_P(police, police->tcfp_mtu)) @@ -326,7 +326,7 @@ static int tcf_act_police(struct sk_buff static int tcf_act_police_dump(struct sk_buff *skb, struct tc_action *a, int bind, int ref) { - unsigned char *b = skb->tail; + unsigned char *b = skb_tail_pointer(skb); struct tcf_police *police = a->priv; struct tc_police opt; @@ -355,7 +355,7 @@ tcf_act_police_dump(struct sk_buff *skb, return skb->len; rtattr_failure: - skb_trim(skb, b - skb->data); + nlmsg_trim(skb, b); return -1; } @@ -494,7 +494,7 @@ struct tcf_police *tcf_police_locate(str } if (police->tcfp_P_tab) police->tcfp_ptoks = L2T_P(police, police->tcfp_mtu); - PSCHED_GET_TIME(police->tcfp_t_c); + police->tcfp_t_c = psched_get_time(); police->tcf_index = parm->index ? parm->index : tcf_police_new_index(); police->tcf_action = parm->action; @@ -542,9 +542,9 @@ int tcf_police(struct sk_buff *skb, stru return police->tcfp_result; } - PSCHED_GET_TIME(now); - toks = PSCHED_TDIFF_SAFE(now, police->tcfp_t_c, - police->tcfp_burst); + now = psched_get_time(); + toks = psched_tdiff_bounded(now, police->tcfp_t_c, + police->tcfp_burst); if (police->tcfp_P_tab) { ptoks = toks + police->tcfp_ptoks; if (ptoks > (long)L2T_P(police, police->tcfp_mtu)) @@ -572,7 +572,7 @@ EXPORT_SYMBOL(tcf_police); int tcf_police_dump(struct sk_buff *skb, struct tcf_police *police) { - unsigned char *b = skb->tail; + unsigned char *b = skb_tail_pointer(skb); struct tc_police opt; opt.index = police->tcf_index; @@ -598,7 +598,7 @@ int tcf_police_dump(struct sk_buff *skb, return skb->len; rtattr_failure: - skb_trim(skb, b - skb->data); + nlmsg_trim(skb, b); return -1; } diff -puN net/sched/act_simple.c~git-net net/sched/act_simple.c --- a/net/sched/act_simple.c~git-net +++ a/net/sched/act_simple.c @@ -16,6 +16,7 @@ #include #include #include +#include #include #define TCA_ACT_SIMP 22 @@ -155,7 +156,7 @@ static inline int tcf_simp_cleanup(struc static inline int tcf_simp_dump(struct sk_buff *skb, struct tc_action *a, int bind, int ref) { - unsigned char *b = skb->tail; + unsigned char *b = skb_tail_pointer(skb); struct tcf_defact *d = a->priv; struct tc_defact opt; struct tcf_t t; @@ -173,7 +174,7 @@ static inline int tcf_simp_dump(struct s return skb->len; rtattr_failure: - skb_trim(skb, b - skb->data); + nlmsg_trim(skb, b); return -1; } diff -puN net/sched/cls_api.c~git-net net/sched/cls_api.c --- a/net/sched/cls_api.c~git-net +++ a/net/sched/cls_api.c @@ -29,9 +29,10 @@ #include #include #include -#include #include #include +#include +#include #include #include #include @@ -323,7 +324,7 @@ tcf_fill_node(struct sk_buff *skb, struc { struct tcmsg *tcm; struct nlmsghdr *nlh; - 
unsigned char *b = skb->tail; + unsigned char *b = skb_tail_pointer(skb); nlh = NLMSG_NEW(skb, pid, seq, event, sizeof(*tcm), flags); tcm = NLMSG_DATA(nlh); @@ -340,12 +341,12 @@ tcf_fill_node(struct sk_buff *skb, struc if (tp->ops->dump && tp->ops->dump(tp, fh, skb, tcm) < 0) goto rtattr_failure; } - nlh->nlmsg_len = skb->tail - b; + nlh->nlmsg_len = skb_tail_pointer(skb) - b; return skb->len; nlmsg_failure: rtattr_failure: - skb_trim(skb, b - skb->data); + nlmsg_trim(skb, b); return -1; } @@ -399,7 +400,6 @@ static int tc_dump_tfilter(struct sk_buf if ((dev = dev_get_by_index(tcm->tcm_ifindex)) == NULL) return skb->len; - read_lock(&qdisc_tree_lock); if (!tcm->tcm_parent) q = dev->qdisc_sleeping; else @@ -456,7 +456,6 @@ errout: if (cl) cops->put(q, cl); out: - read_unlock(&qdisc_tree_lock); dev_put(dev); return skb->len; } @@ -563,30 +562,30 @@ tcf_exts_dump(struct sk_buff *skb, struc * to work with both old and new modes of entering * tc data even if iproute2 was newer - jhs */ - struct rtattr * p_rta = (struct rtattr*) skb->tail; + struct rtattr *p_rta = (struct rtattr *)skb_tail_pointer(skb); if (exts->action->type != TCA_OLD_COMPAT) { RTA_PUT(skb, map->action, 0, NULL); if (tcf_action_dump(skb, exts->action, 0, 0) < 0) goto rtattr_failure; - p_rta->rta_len = skb->tail - (u8*)p_rta; + p_rta->rta_len = skb_tail_pointer(skb) - (u8 *)p_rta; } else if (map->police) { RTA_PUT(skb, map->police, 0, NULL); if (tcf_action_dump_old(skb, exts->action, 0, 0) < 0) goto rtattr_failure; - p_rta->rta_len = skb->tail - (u8*)p_rta; + p_rta->rta_len = skb_tail_pointer(skb) - (u8 *)p_rta; } } #elif defined CONFIG_NET_CLS_POLICE if (map->police && exts->police) { - struct rtattr * p_rta = (struct rtattr*) skb->tail; + struct rtattr *p_rta = (struct rtattr *)skb_tail_pointer(skb); RTA_PUT(skb, map->police, 0, NULL); if (tcf_police_dump(skb, exts->police) < 0) goto rtattr_failure; - p_rta->rta_len = skb->tail - (u8*)p_rta; + p_rta->rta_len = skb_tail_pointer(skb) - (u8 *)p_rta; } #endif return 0; @@ -614,18 +613,11 @@ rtattr_failure: __attribute__ ((unused)) static int __init tc_filter_init(void) { - struct rtnetlink_link *link_p = rtnetlink_links[PF_UNSPEC]; + rtnl_register(PF_UNSPEC, RTM_NEWTFILTER, tc_ctl_tfilter, NULL); + rtnl_register(PF_UNSPEC, RTM_DELTFILTER, tc_ctl_tfilter, NULL); + rtnl_register(PF_UNSPEC, RTM_GETTFILTER, tc_ctl_tfilter, + tc_dump_tfilter); - /* Setup rtnetlink links. It is made here to avoid - exporting large number of public symbols. 
- */ - - if (link_p) { - link_p[RTM_NEWTFILTER-RTM_BASE].doit = tc_ctl_tfilter; - link_p[RTM_DELTFILTER-RTM_BASE].doit = tc_ctl_tfilter; - link_p[RTM_GETTFILTER-RTM_BASE].doit = tc_ctl_tfilter; - link_p[RTM_GETTFILTER-RTM_BASE].dumpit = tc_dump_tfilter; - } return 0; } diff -puN net/sched/cls_basic.c~git-net net/sched/cls_basic.c --- a/net/sched/cls_basic.c~git-net +++ a/net/sched/cls_basic.c @@ -17,6 +17,7 @@ #include #include #include +#include #include #include @@ -245,7 +246,7 @@ static int basic_dump(struct tcf_proto * struct sk_buff *skb, struct tcmsg *t) { struct basic_filter *f = (struct basic_filter *) fh; - unsigned char *b = skb->tail; + unsigned char *b = skb_tail_pointer(skb); struct rtattr *rta; if (f == NULL) @@ -263,11 +264,11 @@ static int basic_dump(struct tcf_proto * tcf_em_tree_dump(skb, &f->ematches, TCA_BASIC_EMATCHES) < 0) goto rtattr_failure; - rta->rta_len = (skb->tail - b); + rta->rta_len = skb_tail_pointer(skb) - b; return skb->len; rtattr_failure: - skb_trim(skb, b - skb->data); + nlmsg_trim(skb, b); return -1; } diff -puN net/sched/cls_fw.c~git-net net/sched/cls_fw.c --- a/net/sched/cls_fw.c~git-net +++ a/net/sched/cls_fw.c @@ -38,6 +38,7 @@ #include #include #include +#include #include #include #include @@ -348,7 +349,7 @@ static int fw_dump(struct tcf_proto *tp, { struct fw_head *head = (struct fw_head *)tp->root; struct fw_filter *f = (struct fw_filter*)fh; - unsigned char *b = skb->tail; + unsigned char *b = skb_tail_pointer(skb); struct rtattr *rta; if (f == NULL) @@ -374,7 +375,7 @@ static int fw_dump(struct tcf_proto *tp, if (tcf_exts_dump(skb, &f->exts, &fw_ext_map) < 0) goto rtattr_failure; - rta->rta_len = skb->tail - b; + rta->rta_len = skb_tail_pointer(skb) - b; if (tcf_exts_dump_stats(skb, &f->exts, &fw_ext_map) < 0) goto rtattr_failure; @@ -382,7 +383,7 @@ static int fw_dump(struct tcf_proto *tp, return skb->len; rtattr_failure: - skb_trim(skb, b - skb->data); + nlmsg_trim(skb, b); return -1; } diff -puN net/sched/cls_route.c~git-net net/sched/cls_route.c --- a/net/sched/cls_route.c~git-net +++ a/net/sched/cls_route.c @@ -28,6 +28,7 @@ #include #include #include +#include #include #include #include @@ -88,9 +89,9 @@ static __inline__ int route4_fastmap_has static inline void route4_reset_fastmap(struct net_device *dev, struct route4_head *head, u32 id) { - spin_lock_bh(&dev->queue_lock); + qdisc_lock_tree(dev); memset(head->fastmap, 0, sizeof(head->fastmap)); - spin_unlock_bh(&dev->queue_lock); + qdisc_unlock_tree(dev); } static inline void @@ -562,7 +563,7 @@ static int route4_dump(struct tcf_proto struct sk_buff *skb, struct tcmsg *t) { struct route4_filter *f = (struct route4_filter*)fh; - unsigned char *b = skb->tail; + unsigned char *b = skb_tail_pointer(skb); struct rtattr *rta; u32 id; @@ -591,7 +592,7 @@ static int route4_dump(struct tcf_proto if (tcf_exts_dump(skb, &f->exts, &route_ext_map) < 0) goto rtattr_failure; - rta->rta_len = skb->tail - b; + rta->rta_len = skb_tail_pointer(skb) - b; if (tcf_exts_dump_stats(skb, &f->exts, &route_ext_map) < 0) goto rtattr_failure; @@ -599,7 +600,7 @@ static int route4_dump(struct tcf_proto return skb->len; rtattr_failure: - skb_trim(skb, b - skb->data); + nlmsg_trim(skb, b); return -1; } diff -puN net/sched/cls_rsvp.c~git-net net/sched/cls_rsvp.c --- a/net/sched/cls_rsvp.c~git-net +++ a/net/sched/cls_rsvp.c @@ -31,6 +31,7 @@ #include #include #include +#include #include #include diff -puN net/sched/cls_rsvp.h~git-net net/sched/cls_rsvp.h --- a/net/sched/cls_rsvp.h~git-net +++ a/net/sched/cls_rsvp.h 
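(Illustration, not part of the patch.) The bulk of the classifier and action hunks are the same mechanical conversion: skb->tail becomes skb_tail_pointer(skb), skb->nh.raw/skb->nh.iph/skb->h.uh become skb_network_header(), ip_hdr(), udp_hdr() and friends, and the error-path skb_trim(skb, b - skb->data) becomes nlmsg_trim(skb, b). Shown on a dump function shaped like the ones in these hunks (hypothetical example, only the accessor usage matters):

#include <linux/skbuff.h>
#include <linux/rtnetlink.h>
#include <net/netlink.h>

static int example_dump(struct sk_buff *skb)
{
	unsigned char *b = skb_tail_pointer(skb);	/* was: skb->tail */
	struct rtattr *rta = (struct rtattr *)b;

	RTA_PUT(skb, TCA_OPTIONS, 0, NULL);
	/* ... nested attributes would be added here ... */
	rta->rta_len = skb_tail_pointer(skb) - b;	/* was: skb->tail - b */
	return skb->len;

rtattr_failure:
	nlmsg_trim(skb, b);	/* was: skb_trim(skb, b - skb->data) */
	return -1;
}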
@@ -143,9 +143,9 @@ static int rsvp_classify(struct sk_buff u8 tunnelid = 0; u8 *xprt; #if RSVP_DST_LEN == 4 - struct ipv6hdr *nhptr = skb->nh.ipv6h; + struct ipv6hdr *nhptr = ipv6_hdr(skb); #else - struct iphdr *nhptr = skb->nh.iph; + struct iphdr *nhptr = ip_hdr(skb); #endif restart: @@ -160,7 +160,7 @@ restart: dst = &nhptr->daddr; protocol = nhptr->protocol; xprt = ((u8*)nhptr) + (nhptr->ihl<<2); - if (nhptr->frag_off&__constant_htons(IP_MF|IP_OFFSET)) + if (nhptr->frag_off & htons(IP_MF|IP_OFFSET)) return -1; #endif @@ -593,7 +593,7 @@ static int rsvp_dump(struct tcf_proto *t { struct rsvp_filter *f = (struct rsvp_filter*)fh; struct rsvp_session *s; - unsigned char *b = skb->tail; + unsigned char *b = skb_tail_pointer(skb); struct rtattr *rta; struct tc_rsvp_pinfo pinfo; @@ -623,14 +623,14 @@ static int rsvp_dump(struct tcf_proto *t if (tcf_exts_dump(skb, &f->exts, &rsvp_ext_map) < 0) goto rtattr_failure; - rta->rta_len = skb->tail - b; + rta->rta_len = skb_tail_pointer(skb) - b; if (tcf_exts_dump_stats(skb, &f->exts, &rsvp_ext_map) < 0) goto rtattr_failure; return skb->len; rtattr_failure: - skb_trim(skb, b - skb->data); + nlmsg_trim(skb, b); return -1; } diff -puN net/sched/cls_rsvp6.c~git-net net/sched/cls_rsvp6.c --- a/net/sched/cls_rsvp6.c~git-net +++ a/net/sched/cls_rsvp6.c @@ -34,6 +34,7 @@ #include #include #include +#include #define RSVP_DST_LEN 4 #define RSVP_ID "rsvp6" diff -puN net/sched/cls_tcindex.c~git-net net/sched/cls_tcindex.c --- a/net/sched/cls_tcindex.c~git-net +++ a/net/sched/cls_tcindex.c @@ -12,6 +12,7 @@ #include #include #include +#include #include #include @@ -448,7 +449,7 @@ static int tcindex_dump(struct tcf_proto { struct tcindex_data *p = PRIV(tp); struct tcindex_filter_result *r = (struct tcindex_filter_result *) fh; - unsigned char *b = skb->tail; + unsigned char *b = skb_tail_pointer(skb); struct rtattr *rta; DPRINTK("tcindex_dump(tp %p,fh 0x%lx,skb %p,t %p),p %p,r %p,b %p\n", @@ -463,7 +464,7 @@ static int tcindex_dump(struct tcf_proto RTA_PUT(skb,TCA_TCINDEX_SHIFT,sizeof(p->shift),&p->shift); RTA_PUT(skb,TCA_TCINDEX_FALL_THROUGH,sizeof(p->fall_through), &p->fall_through); - rta->rta_len = skb->tail-b; + rta->rta_len = skb_tail_pointer(skb) - b; } else { if (p->perfect) { t->tcm_handle = r-p->perfect; @@ -486,7 +487,7 @@ static int tcindex_dump(struct tcf_proto if (tcf_exts_dump(skb, &r->exts, &tcindex_ext_map) < 0) goto rtattr_failure; - rta->rta_len = skb->tail-b; + rta->rta_len = skb_tail_pointer(skb) - b; if (tcf_exts_dump_stats(skb, &r->exts, &tcindex_ext_map) < 0) goto rtattr_failure; @@ -495,7 +496,7 @@ static int tcindex_dump(struct tcf_proto return skb->len; rtattr_failure: - skb_trim(skb, b - skb->data); + nlmsg_trim(skb, b); return -1; } diff -puN net/sched/cls_u32.c~git-net net/sched/cls_u32.c --- a/net/sched/cls_u32.c~git-net +++ a/net/sched/cls_u32.c @@ -50,6 +50,7 @@ #include #include #include +#include #include #include #include @@ -119,7 +120,7 @@ static int u32_classify(struct sk_buff * } stack[TC_U32_MAXDEPTH]; struct tc_u_hnode *ht = (struct tc_u_hnode*)tp->root; - u8 *ptr = skb->nh.raw; + u8 *ptr = skb_network_header(skb); struct tc_u_knode *n; int sdepth = 0; int off2 = 0; @@ -213,7 +214,7 @@ check_terminal: off2 = 0; } - if (ptr < skb->tail) + if (ptr < skb_tail_pointer(skb)) goto next_ht; } @@ -435,7 +436,7 @@ static void u32_destroy(struct tcf_proto BUG_TRAP(ht->refcnt == 0); kfree(ht); - }; + } kfree(tp_c); } @@ -718,7 +719,7 @@ static int u32_dump(struct tcf_proto *tp struct sk_buff *skb, struct tcmsg *t) { struct 
tc_u_knode *n = (struct tc_u_knode*)fh; - unsigned char *b = skb->tail; + unsigned char *b = skb_tail_pointer(skb); struct rtattr *rta; if (n == NULL) @@ -765,14 +766,14 @@ static int u32_dump(struct tcf_proto *tp #endif } - rta->rta_len = skb->tail - b; + rta->rta_len = skb_tail_pointer(skb) - b; if (TC_U32_KEY(n->handle)) if (tcf_exts_dump_stats(skb, &n->exts, &u32_ext_map) < 0) goto rtattr_failure; return skb->len; rtattr_failure: - skb_trim(skb, b - skb->data); + nlmsg_trim(skb, b); return -1; } diff -puN net/sched/em_u32.c~git-net net/sched/em_u32.c --- a/net/sched/em_u32.c~git-net +++ a/net/sched/em_u32.c @@ -22,7 +22,7 @@ static int em_u32_match(struct sk_buff * struct tcf_pkt_info *info) { struct tc_u32_key *key = (struct tc_u32_key *) em->data; - unsigned char *ptr = skb->nh.raw; + const unsigned char *ptr = skb_network_header(skb); if (info) { if (info->ptr) diff -puN net/sched/ematch.c~git-net net/sched/ematch.c --- a/net/sched/ematch.c~git-net +++ a/net/sched/ematch.c @@ -418,17 +418,19 @@ void tcf_em_tree_destroy(struct tcf_prot int tcf_em_tree_dump(struct sk_buff *skb, struct tcf_ematch_tree *tree, int tlv) { int i; - struct rtattr * top_start = (struct rtattr*) skb->tail; - struct rtattr * list_start; + u8 *tail; + struct rtattr *top_start = (struct rtattr *)skb_tail_pointer(skb); + struct rtattr *list_start; RTA_PUT(skb, tlv, 0, NULL); RTA_PUT(skb, TCA_EMATCH_TREE_HDR, sizeof(tree->hdr), &tree->hdr); - list_start = (struct rtattr *) skb->tail; + list_start = (struct rtattr *)skb_tail_pointer(skb); RTA_PUT(skb, TCA_EMATCH_TREE_LIST, 0, NULL); + tail = skb_tail_pointer(skb); for (i = 0; i < tree->hdr.nmatches; i++) { - struct rtattr *match_start = (struct rtattr*) skb->tail; + struct rtattr *match_start = (struct rtattr *)tail; struct tcf_ematch *em = tcf_em_get_match(tree, i); struct tcf_ematch_hdr em_hdr = { .kind = em->ops ? em->ops->kind : TCF_EM_CONTAINER, @@ -447,11 +449,12 @@ int tcf_em_tree_dump(struct sk_buff *skb } else if (em->datalen > 0) RTA_PUT_NOHDR(skb, em->datalen, (void *) em->data); - match_start->rta_len = skb->tail - (u8*) match_start; + tail = skb_tail_pointer(skb); + match_start->rta_len = tail - (u8 *)match_start; } - list_start->rta_len = skb->tail - (u8 *) list_start; - top_start->rta_len = skb->tail - (u8 *) top_start; + list_start->rta_len = tail - (u8 *)list_start; + top_start->rta_len = tail - (u8 *)top_start; return 0; diff -puN net/sched/sch_api.c~git-net net/sched/sch_api.c --- a/net/sched/sch_api.c~git-net +++ a/net/sched/sch_api.c @@ -27,14 +27,15 @@ #include #include #include -#include #include #include #include #include #include #include +#include +#include #include #include @@ -190,7 +191,7 @@ int unregister_qdisc(struct Qdisc_ops *q (root qdisc, all its children, children of children etc.) 
*/ -static struct Qdisc *__qdisc_lookup(struct net_device *dev, u32 handle) +struct Qdisc *qdisc_lookup(struct net_device *dev, u32 handle) { struct Qdisc *q; @@ -201,16 +202,6 @@ static struct Qdisc *__qdisc_lookup(stru return NULL; } -struct Qdisc *qdisc_lookup(struct net_device *dev, u32 handle) -{ - struct Qdisc *q; - - read_lock(&qdisc_tree_lock); - q = __qdisc_lookup(dev, handle); - read_unlock(&qdisc_tree_lock); - return q; -} - static struct Qdisc *qdisc_leaf(struct Qdisc *p, u32 classid) { unsigned long cl; @@ -291,6 +282,48 @@ void qdisc_put_rtab(struct qdisc_rate_ta } } +static enum hrtimer_restart qdisc_watchdog(struct hrtimer *timer) +{ + struct qdisc_watchdog *wd = container_of(timer, struct qdisc_watchdog, + timer); + struct net_device *dev = wd->qdisc->dev; + + wd->qdisc->flags &= ~TCQ_F_THROTTLED; + smp_wmb(); + if (spin_trylock(&dev->queue_lock)) { + qdisc_run(dev); + spin_unlock(&dev->queue_lock); + } else + netif_schedule(dev); + + return HRTIMER_NORESTART; +} + +void qdisc_watchdog_init(struct qdisc_watchdog *wd, struct Qdisc *qdisc) +{ + hrtimer_init(&wd->timer, CLOCK_MONOTONIC, HRTIMER_MODE_ABS); + wd->timer.function = qdisc_watchdog; + wd->qdisc = qdisc; +} +EXPORT_SYMBOL(qdisc_watchdog_init); + +void qdisc_watchdog_schedule(struct qdisc_watchdog *wd, psched_time_t expires) +{ + ktime_t time; + + wd->qdisc->flags |= TCQ_F_THROTTLED; + time = ktime_set(0, 0); + time = ktime_add_ns(time, PSCHED_US2NS(expires)); + hrtimer_start(&wd->timer, time, HRTIMER_MODE_ABS); +} +EXPORT_SYMBOL(qdisc_watchdog_schedule); + +void qdisc_watchdog_cancel(struct qdisc_watchdog *wd) +{ + hrtimer_cancel(&wd->timer); + wd->qdisc->flags &= ~TCQ_F_THROTTLED; +} +EXPORT_SYMBOL(qdisc_watchdog_cancel); /* Allocate an unique handle from space managed by kernel */ @@ -362,7 +395,7 @@ void qdisc_tree_decrease_qlen(struct Qdi if (n == 0) return; while ((parentid = sch->parent)) { - sch = __qdisc_lookup(sch->dev, TC_H_MAJ(parentid)); + sch = qdisc_lookup(sch->dev, TC_H_MAJ(parentid)); cops = sch->ops->cl_ops; if (cops->qlen_notify) { cl = cops->get(sch, parentid); @@ -467,12 +500,16 @@ qdisc_create(struct net_device *dev, u32 if (handle == TC_H_INGRESS) { sch->flags |= TCQ_F_INGRESS; + sch->stats_lock = &dev->ingress_lock; handle = TC_H_MAKE(TC_H_INGRESS, 0); - } else if (handle == 0) { - handle = qdisc_alloc_handle(dev); - err = -ENOMEM; - if (handle == 0) - goto err_out3; + } else { + sch->stats_lock = &dev->queue_lock; + if (handle == 0) { + handle = qdisc_alloc_handle(dev); + err = -ENOMEM; + if (handle == 0) + goto err_out3; + } } sch->handle = handle; @@ -621,9 +658,9 @@ static int tc_get_qdisc(struct sk_buff * return err; if (q) { qdisc_notify(skb, n, clid, q, NULL); - spin_lock_bh(&dev->queue_lock); + qdisc_lock_tree(dev); qdisc_destroy(q); - spin_unlock_bh(&dev->queue_lock); + qdisc_unlock_tree(dev); } } else { qdisc_notify(skb, n, clid, NULL, q); @@ -756,17 +793,17 @@ graft: err = qdisc_graft(dev, p, clid, q, &old_q); if (err) { if (q) { - spin_lock_bh(&dev->queue_lock); + qdisc_lock_tree(dev); qdisc_destroy(q); - spin_unlock_bh(&dev->queue_lock); + qdisc_unlock_tree(dev); } return err; } qdisc_notify(skb, n, clid, old_q, q); if (old_q) { - spin_lock_bh(&dev->queue_lock); + qdisc_lock_tree(dev); qdisc_destroy(old_q); - spin_unlock_bh(&dev->queue_lock); + qdisc_unlock_tree(dev); } } return 0; @@ -777,7 +814,7 @@ static int tc_fill_qdisc(struct sk_buff { struct tcmsg *tcm; struct nlmsghdr *nlh; - unsigned char *b = skb->tail; + unsigned char *b = skb_tail_pointer(skb); struct gnet_dump d; nlh 
= NLMSG_NEW(skb, pid, seq, event, sizeof(*tcm), flags); @@ -811,12 +848,12 @@ static int tc_fill_qdisc(struct sk_buff if (gnet_stats_finish_copy(&d) < 0) goto rtattr_failure; - nlh->nlmsg_len = skb->tail - b; + nlh->nlmsg_len = skb_tail_pointer(skb) - b; return skb->len; nlmsg_failure: rtattr_failure: - skb_trim(skb, b - skb->data); + nlmsg_trim(skb, b); return -1; } @@ -862,7 +899,6 @@ static int tc_dump_qdisc(struct sk_buff continue; if (idx > s_idx) s_q_idx = 0; - read_lock(&qdisc_tree_lock); q_idx = 0; list_for_each_entry(q, &dev->qdisc_list, list) { if (q_idx < s_q_idx) { @@ -870,13 +906,10 @@ static int tc_dump_qdisc(struct sk_buff continue; } if (tc_fill_qdisc(skb, q, q->parent, NETLINK_CB(cb->skb).pid, - cb->nlh->nlmsg_seq, NLM_F_MULTI, RTM_NEWQDISC) <= 0) { - read_unlock(&qdisc_tree_lock); + cb->nlh->nlmsg_seq, NLM_F_MULTI, RTM_NEWQDISC) <= 0) goto done; - } q_idx++; } - read_unlock(&qdisc_tree_lock); } done: @@ -1015,7 +1048,7 @@ static int tc_fill_tclass(struct sk_buff { struct tcmsg *tcm; struct nlmsghdr *nlh; - unsigned char *b = skb->tail; + unsigned char *b = skb_tail_pointer(skb); struct gnet_dump d; struct Qdisc_class_ops *cl_ops = q->ops->cl_ops; @@ -1040,12 +1073,12 @@ static int tc_fill_tclass(struct sk_buff if (gnet_stats_finish_copy(&d) < 0) goto rtattr_failure; - nlh->nlmsg_len = skb->tail - b; + nlh->nlmsg_len = skb_tail_pointer(skb) - b; return skb->len; nlmsg_failure: rtattr_failure: - skb_trim(skb, b - skb->data); + nlmsg_trim(skb, b); return -1; } @@ -1099,7 +1132,6 @@ static int tc_dump_tclass(struct sk_buff s_t = cb->args[0]; t = 0; - read_lock(&qdisc_tree_lock); list_for_each_entry(q, &dev->qdisc_list, list) { if (t < s_t || !q->ops->cl_ops || (tcm->tcm_parent && @@ -1121,7 +1153,6 @@ static int tc_dump_tclass(struct sk_buff break; t++; } - read_unlock(&qdisc_tree_lock); cb->args[0] = t; @@ -1146,7 +1177,7 @@ reclassify: for ( ; tp; tp = tp->next) { if ((tp->protocol == protocol || - tp->protocol == __constant_htons(ETH_P_ALL)) && + tp->protocol == htons(ETH_P_ALL)) && (err = tp->classify(skb, tp, res)) >= 0) { #ifdef CONFIG_NET_CLS_ACT if ( TC_ACT_RECLASSIFY == err) { @@ -1175,15 +1206,31 @@ reclassify: return -1; } -static int psched_us_per_tick = 1; -static int psched_tick_per_us = 1; +void tcf_destroy(struct tcf_proto *tp) +{ + tp->ops->destroy(tp); + module_put(tp->ops->owner); + kfree(tp); +} + +void tcf_destroy_chain(struct tcf_proto *fl) +{ + struct tcf_proto *tp; + + while ((tp = fl) != NULL) { + fl = tp->next; + tcf_destroy(tp); + } +} +EXPORT_SYMBOL(tcf_destroy_chain); #ifdef CONFIG_PROC_FS static int psched_show(struct seq_file *seq, void *v) { seq_printf(seq, "%08x %08x %08x %08x\n", - psched_tick_per_us, psched_us_per_tick, - 1000000, HZ); + (u32)NSEC_PER_USEC, (u32)PSCHED_US2NS(1), + 1000000, + (u32)NSEC_PER_SEC/(u32)ktime_to_ns(KTIME_MONOTONIC_RES)); return 0; } @@ -1202,101 +1249,19 @@ static const struct file_operations psch }; #endif -#ifdef CONFIG_NET_SCH_CLK_CPU -psched_tdiff_t psched_clock_per_hz; -int psched_clock_scale; -EXPORT_SYMBOL(psched_clock_per_hz); -EXPORT_SYMBOL(psched_clock_scale); - -psched_time_t psched_time_base; -cycles_t psched_time_mark; -EXPORT_SYMBOL(psched_time_mark); -EXPORT_SYMBOL(psched_time_base); - -/* - * Periodically adjust psched_time_base to avoid overflow - * with 32-bit get_cycles(). Safe up to 4GHz CPU. 
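(Illustration, not part of the patch.) The qdisc_watchdog helpers added to sch_api.c above give every shaping qdisc a common hrtimer-backed throttle, which is what lets CBQ and HFSC below drop their private watchdog timer code. A minimal sketch of the intended usage, with a hypothetical qdisc and an invented next_send field:

#include <linux/skbuff.h>
#include <linux/rtnetlink.h>
#include <net/pkt_sched.h>

struct foo_sched_data {
	struct qdisc_watchdog	watchdog;
	psched_time_t		next_send;	/* hypothetical pacing deadline */
};

static int foo_init(struct Qdisc *sch, struct rtattr *opt)
{
	struct foo_sched_data *q = qdisc_priv(sch);

	qdisc_watchdog_init(&q->watchdog, sch);
	return 0;
}

static struct sk_buff *foo_dequeue(struct Qdisc *sch)
{
	struct foo_sched_data *q = qdisc_priv(sch);
	psched_time_t now = psched_get_time();

	if (sch->q.qlen && now < q->next_send) {
		/* sets TCQ_F_THROTTLED and arms the hrtimer */
		qdisc_watchdog_schedule(&q->watchdog, q->next_send);
		sch->qstats.overlimits++;
		return NULL;
	}
	return __skb_dequeue(&sch->q);
}

static void foo_reset(struct Qdisc *sch)
{
	struct foo_sched_data *q = qdisc_priv(sch);

	qdisc_watchdog_cancel(&q->watchdog);
	skb_queue_purge(&sch->q);
}

When the hrtimer fires, qdisc_watchdog() clears TCQ_F_THROTTLED and reschedules the device, so the qdisc itself never touches jiffies or HZ.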
- */ -static void psched_tick(unsigned long); -static DEFINE_TIMER(psched_timer, psched_tick, 0, 0); - -static void psched_tick(unsigned long dummy) -{ - if (sizeof(cycles_t) == sizeof(u32)) { - psched_time_t dummy_stamp; - PSCHED_GET_TIME(dummy_stamp); - psched_timer.expires = jiffies + 1*HZ; - add_timer(&psched_timer); - } -} - -int __init psched_calibrate_clock(void) -{ - psched_time_t stamp, stamp1; - struct timeval tv, tv1; - psched_tdiff_t delay; - long rdelay; - unsigned long stop; - - psched_tick(0); - stop = jiffies + HZ/10; - PSCHED_GET_TIME(stamp); - do_gettimeofday(&tv); - while (time_before(jiffies, stop)) { - barrier(); - cpu_relax(); - } - PSCHED_GET_TIME(stamp1); - do_gettimeofday(&tv1); - - delay = PSCHED_TDIFF(stamp1, stamp); - rdelay = tv1.tv_usec - tv.tv_usec; - rdelay += (tv1.tv_sec - tv.tv_sec)*1000000; - if (rdelay > delay) - return -1; - delay /= rdelay; - psched_tick_per_us = delay; - while ((delay>>=1) != 0) - psched_clock_scale++; - psched_us_per_tick = 1<>psched_clock_scale; - return 0; -} -#endif - static int __init pktsched_init(void) { - struct rtnetlink_link *link_p; - -#ifdef CONFIG_NET_SCH_CLK_CPU - if (psched_calibrate_clock() < 0) - return -1; -#elif defined(CONFIG_NET_SCH_CLK_JIFFIES) - psched_tick_per_us = HZ< #include #include /* for fput */ +#include #include #include @@ -157,19 +158,6 @@ static unsigned long atm_tc_bind_filter( return atm_tc_get(sch,classid); } - -static void destroy_filters(struct atm_flow_data *flow) -{ - struct tcf_proto *filter; - - while ((filter = flow->filter_list)) { - DPRINTK("destroy_filters: destroying filter %p\n",filter); - flow->filter_list = filter->next; - tcf_destroy(filter); - } -} - - /* * atm_tc_put handles all destructions, including the ones that are explicitly * requested (atm_tc_destroy, etc.). The assumption here is that we never drop @@ -194,7 +182,7 @@ static void atm_tc_put(struct Qdisc *sch *prev = flow->next; DPRINTK("atm_tc_put: qdisc %p\n",flow->q); qdisc_destroy(flow->q); - destroy_filters(flow); + tcf_destroy_chain(flow->filter_list); if (flow->sock) { DPRINTK("atm_tc_put: f_count %d\n", file_count(flow->sock->file)); @@ -503,7 +491,7 @@ static void sch_atm_dequeue(unsigned lon } D2PRINTK("atm_tc_dequeue: sending on class %p\n",flow); /* remove any LL header somebody else has attached */ - skb_pull(skb,(char *) skb->nh.iph-(char *) skb->data); + skb_pull(skb, skb_network_offset(skb)); if (skb_headroom(skb) < flow->hdr_len) { struct sk_buff *new; @@ -513,7 +501,7 @@ static void sch_atm_dequeue(unsigned lon skb = new; } D2PRINTK("sch_atm_dequeue: ip %p, data %p\n", - skb->nh.iph,skb->data); + skb_network_header(skb), skb->data); ATM_SKB(skb)->vcc = flow->vcc; memcpy(skb_push(skb,flow->hdr_len),flow->hdr, flow->hdr_len); @@ -610,7 +598,7 @@ static void atm_tc_destroy(struct Qdisc DPRINTK("atm_tc_destroy(sch %p,[qdisc %p])\n",sch,p); /* races ? 
*/ while ((flow = p->flows)) { - destroy_filters(flow); + tcf_destroy_chain(flow->filter_list); if (flow->ref > 1) printk(KERN_ERR "atm_destroy: %p->ref = %d\n",flow, flow->ref); @@ -631,7 +619,7 @@ static int atm_tc_dump_class(struct Qdis { struct atm_qdisc_data *p = PRIV(sch); struct atm_flow_data *flow = (struct atm_flow_data *) cl; - unsigned char *b = skb->tail; + unsigned char *b = skb_tail_pointer(skb); struct rtattr *rta; DPRINTK("atm_tc_dump_class(sch %p,[qdisc %p],flow %p,skb %p,tcm %p)\n", @@ -661,11 +649,11 @@ static int atm_tc_dump_class(struct Qdis RTA_PUT(skb,TCA_ATM_EXCESS,sizeof(zero),&zero); } - rta->rta_len = skb->tail-b; + rta->rta_len = skb_tail_pointer(skb) - b; return skb->len; rtattr_failure: - skb_trim(skb,b-skb->data); + nlmsg_trim(skb, b); return -1; } static int diff -puN net/sched/sch_cbq.c~git-net net/sched/sch_cbq.c --- a/net/sched/sch_cbq.c~git-net +++ a/net/sched/sch_cbq.c @@ -29,6 +29,7 @@ #include #include #include +#include #include #include #include @@ -112,7 +113,7 @@ struct cbq_class /* Overlimit strategy parameters */ void (*overlimit)(struct cbq_class *cl); - long penalty; + psched_tdiff_t penalty; /* General scheduler (WRR) parameters */ long allot; @@ -143,7 +144,7 @@ struct cbq_class psched_time_t undertime; long avgidle; long deficit; /* Saved deficit for WRR */ - unsigned long penalized; + psched_time_t penalized; struct gnet_stats_basic bstats; struct gnet_stats_queue qstats; struct gnet_stats_rate_est rate_est; @@ -180,12 +181,12 @@ struct cbq_sched_data psched_time_t now_rt; /* Cached real time */ unsigned pmask; - struct timer_list delay_timer; - struct timer_list wd_timer; /* Watchdog timer, + struct hrtimer delay_timer; + struct qdisc_watchdog watchdog; /* Watchdog timer, started when CBQ has backlog, but cannot transmit just now */ - long wd_expires; + psched_tdiff_t wd_expires; int toplevel; u32 hgenerator; }; @@ -384,12 +385,12 @@ cbq_mark_toplevel(struct cbq_sched_data psched_time_t now; psched_tdiff_t incr; - PSCHED_GET_TIME(now); - incr = PSCHED_TDIFF(now, q->now_rt); - PSCHED_TADD2(q->now, incr, now); + now = psched_get_time(); + incr = now - q->now_rt; + now = q->now + incr; do { - if (PSCHED_TLESS(cl->undertime, now)) { + if (cl->undertime < now) { q->toplevel = cl->level; return; } @@ -473,7 +474,7 @@ cbq_requeue(struct sk_buff *skb, struct static void cbq_ovl_classic(struct cbq_class *cl) { struct cbq_sched_data *q = qdisc_priv(cl->qdisc); - psched_tdiff_t delay = PSCHED_TDIFF(cl->undertime, q->now); + psched_tdiff_t delay = cl->undertime - q->now; if (!cl->delayed) { delay += cl->offtime; @@ -491,7 +492,7 @@ static void cbq_ovl_classic(struct cbq_c cl->avgidle = cl->minidle; if (delay <= 0) delay = 1; - PSCHED_TADD2(q->now, delay, cl->undertime); + cl->undertime = q->now + delay; cl->xstats.overactions++; cl->delayed = 1; @@ -508,7 +509,7 @@ static void cbq_ovl_classic(struct cbq_c psched_tdiff_t base_delay = q->wd_expires; for (b = cl->borrow; b; b = b->borrow) { - delay = PSCHED_TDIFF(b->undertime, q->now); + delay = b->undertime - q->now; if (delay < base_delay) { if (delay <= 0) delay = 1; @@ -546,27 +547,32 @@ static void cbq_ovl_rclassic(struct cbq_ static void cbq_ovl_delay(struct cbq_class *cl) { struct cbq_sched_data *q = qdisc_priv(cl->qdisc); - psched_tdiff_t delay = PSCHED_TDIFF(cl->undertime, q->now); + psched_tdiff_t delay = cl->undertime - q->now; if (!cl->delayed) { - unsigned long sched = jiffies; + psched_time_t sched = q->now; + ktime_t expires; delay += cl->offtime; if (cl->avgidle < 0) delay -= 
(-cl->avgidle) - ((-cl->avgidle) >> cl->ewma_log); if (cl->avgidle < cl->minidle) cl->avgidle = cl->minidle; - PSCHED_TADD2(q->now, delay, cl->undertime); + cl->undertime = q->now + delay; if (delay > 0) { - sched += PSCHED_US2JIFFIE(delay) + cl->penalty; + sched += delay + cl->penalty; cl->penalized = sched; cl->cpriority = TC_CBQ_MAXPRIO; q->pmask |= (1<delay_timer) && - (long)(q->delay_timer.expires - sched) > 0) - q->delay_timer.expires = sched; - add_timer(&q->delay_timer); + + expires = ktime_set(0, 0); + expires = ktime_add_ns(expires, PSCHED_US2NS(sched)); + if (hrtimer_try_to_cancel(&q->delay_timer) && + ktime_to_ns(ktime_sub(q->delay_timer.expires, + expires)) > 0) + q->delay_timer.expires = expires; + hrtimer_restart(&q->delay_timer); cl->delayed = 1; cl->xstats.overactions++; return; @@ -583,7 +589,7 @@ static void cbq_ovl_lowprio(struct cbq_c { struct cbq_sched_data *q = qdisc_priv(cl->qdisc); - cl->penalized = jiffies + cl->penalty; + cl->penalized = q->now + cl->penalty; if (cl->cpriority != cl->priority2) { cl->cpriority = cl->priority2; @@ -604,27 +610,19 @@ static void cbq_ovl_drop(struct cbq_clas cbq_ovl_classic(cl); } -static void cbq_watchdog(unsigned long arg) -{ - struct Qdisc *sch = (struct Qdisc*)arg; - - sch->flags &= ~TCQ_F_THROTTLED; - netif_schedule(sch->dev); -} - -static unsigned long cbq_undelay_prio(struct cbq_sched_data *q, int prio) +static psched_tdiff_t cbq_undelay_prio(struct cbq_sched_data *q, int prio, + psched_time_t now) { struct cbq_class *cl; struct cbq_class *cl_prev = q->active[prio]; - unsigned long now = jiffies; - unsigned long sched = now; + psched_time_t sched = now; if (cl_prev == NULL) - return now; + return 0; do { cl = cl_prev->next_alive; - if ((long)(now - cl->penalized) > 0) { + if (now - cl->penalized > 0) { cl_prev->next_alive = cl->next_alive; cl->next_alive = NULL; cl->cpriority = cl->priority; @@ -640,30 +638,34 @@ static unsigned long cbq_undelay_prio(st } cl = cl_prev->next_alive; - } else if ((long)(sched - cl->penalized) > 0) + } else if (sched - cl->penalized > 0) sched = cl->penalized; } while ((cl_prev = cl) != q->active[prio]); - return (long)(sched - now); + return sched - now; } -static void cbq_undelay(unsigned long arg) +static enum hrtimer_restart cbq_undelay(struct hrtimer *timer) { - struct Qdisc *sch = (struct Qdisc*)arg; - struct cbq_sched_data *q = qdisc_priv(sch); - long delay = 0; + struct cbq_sched_data *q = container_of(timer, struct cbq_sched_data, + delay_timer); + struct Qdisc *sch = q->watchdog.qdisc; + psched_time_t now; + psched_tdiff_t delay = 0; unsigned pmask; + now = psched_get_time(); + pmask = q->pmask; q->pmask = 0; while (pmask) { int prio = ffz(~pmask); - long tmp; + psched_tdiff_t tmp; pmask &= ~(1< 0) { q->pmask |= 1<delay_timer.expires = jiffies + delay; - add_timer(&q->delay_timer); + ktime_t time; + + time = ktime_set(0, 0); + time = ktime_add_ns(time, PSCHED_US2NS(now + delay)); + hrtimer_start(&q->delay_timer, time, HRTIMER_MODE_ABS); } sch->flags &= ~TCQ_F_THROTTLED; netif_schedule(sch->dev); + return HRTIMER_NORESTART; } @@ -732,7 +738,7 @@ cbq_update_toplevel(struct cbq_sched_dat if (cl && q->toplevel >= borrowed->level) { if (cl->q->q.qlen > 1) { do { - if (PSCHED_IS_PASTPERFECT(borrowed->undertime)) { + if (borrowed->undertime == PSCHED_PASTPERFECT) { q->toplevel = borrowed->level; return; } @@ -770,7 +776,7 @@ cbq_update(struct cbq_sched_data *q) idle = (now - last) - last_pktlen/rate */ - idle = PSCHED_TDIFF(q->now, cl->last); + idle = q->now - cl->last; if ((unsigned long)idle 
> 128*1024*1024) { avgidle = cl->maxidle; } else { @@ -814,13 +820,11 @@ cbq_update(struct cbq_sched_data *q) idle -= L2T(&q->link, len); idle += L2T(cl, len); - PSCHED_AUDIT_TDIFF(idle); - - PSCHED_TADD2(q->now, idle, cl->undertime); + cl->undertime = q->now + idle; } else { /* Underlimit */ - PSCHED_SET_PASTPERFECT(cl->undertime); + cl->undertime = PSCHED_PASTPERFECT; if (avgidle > cl->maxidle) cl->avgidle = cl->maxidle; else @@ -841,8 +845,7 @@ cbq_under_limit(struct cbq_class *cl) if (cl->tparent == NULL) return cl; - if (PSCHED_IS_PASTPERFECT(cl->undertime) || - !PSCHED_TLESS(q->now, cl->undertime)) { + if (cl->undertime == PSCHED_PASTPERFECT || q->now >= cl->undertime) { cl->delayed = 0; return cl; } @@ -865,8 +868,7 @@ cbq_under_limit(struct cbq_class *cl) } if (cl->level > q->toplevel) return NULL; - } while (!PSCHED_IS_PASTPERFECT(cl->undertime) && - PSCHED_TLESS(q->now, cl->undertime)); + } while (cl->undertime != PSCHED_PASTPERFECT && q->now < cl->undertime); cl->delayed = 0; return cl; @@ -1001,8 +1003,8 @@ cbq_dequeue(struct Qdisc *sch) psched_time_t now; psched_tdiff_t incr; - PSCHED_GET_TIME(now); - incr = PSCHED_TDIFF(now, q->now_rt); + now = psched_get_time(); + incr = now - q->now_rt; if (q->tx_class) { psched_tdiff_t incr2; @@ -1014,12 +1016,12 @@ cbq_dequeue(struct Qdisc *sch) cbq_time = max(real_time, work); */ incr2 = L2T(&q->link, q->tx_len); - PSCHED_TADD(q->now, incr2); + q->now += incr2; cbq_update(q); if ((incr -= incr2) < 0) incr = 0; } - PSCHED_TADD(q->now, incr); + q->now += incr; q->now_rt = now; for (;;) { @@ -1051,11 +1053,11 @@ cbq_dequeue(struct Qdisc *sch) */ if (q->toplevel == TC_CBQ_MAXLEVEL && - PSCHED_IS_PASTPERFECT(q->link.undertime)) + q->link.undertime == PSCHED_PASTPERFECT) break; q->toplevel = TC_CBQ_MAXLEVEL; - PSCHED_SET_PASTPERFECT(q->link.undertime); + q->link.undertime = PSCHED_PASTPERFECT; } /* No packets in scheduler or nobody wants to give them to us :-( @@ -1063,13 +1065,9 @@ cbq_dequeue(struct Qdisc *sch) if (sch->q.qlen) { sch->qstats.overlimits++; - if (q->wd_expires) { - long delay = PSCHED_US2JIFFIE(q->wd_expires); - if (delay <= 0) - delay = 1; - mod_timer(&q->wd_timer, jiffies + delay); - sch->flags |= TCQ_F_THROTTLED; - } + if (q->wd_expires) + qdisc_watchdog_schedule(&q->watchdog, + now + q->wd_expires); } return NULL; } @@ -1276,10 +1274,10 @@ cbq_reset(struct Qdisc* sch) q->pmask = 0; q->tx_class = NULL; q->tx_borrowed = NULL; - del_timer(&q->wd_timer); - del_timer(&q->delay_timer); + qdisc_watchdog_cancel(&q->watchdog); + hrtimer_cancel(&q->delay_timer); q->toplevel = TC_CBQ_MAXLEVEL; - PSCHED_GET_TIME(q->now); + q->now = psched_get_time(); q->now_rt = q->now; for (prio = 0; prio <= TC_CBQ_MAXPRIO; prio++) @@ -1290,7 +1288,7 @@ cbq_reset(struct Qdisc* sch) qdisc_reset(cl->q); cl->next_alive = NULL; - PSCHED_SET_PASTPERFECT(cl->undertime); + cl->undertime = PSCHED_PASTPERFECT; cl->avgidle = cl->maxidle; cl->deficit = cl->quantum; cl->cpriority = cl->priority; @@ -1379,7 +1377,7 @@ static int cbq_set_overlimit(struct cbq_ default: return -EINVAL; } - cl->penalty = (ovl->penalty*HZ)/1000; + cl->penalty = ovl->penalty; return 0; } @@ -1446,14 +1444,11 @@ static int cbq_init(struct Qdisc *sch, s q->link.minidle = -0x7FFFFFFF; q->link.stats_lock = &sch->dev->queue_lock; - init_timer(&q->wd_timer); - q->wd_timer.data = (unsigned long)sch; - q->wd_timer.function = cbq_watchdog; - init_timer(&q->delay_timer); - q->delay_timer.data = (unsigned long)sch; + qdisc_watchdog_init(&q->watchdog, sch); + hrtimer_init(&q->delay_timer, 
CLOCK_MONOTONIC, HRTIMER_MODE_ABS); q->delay_timer.function = cbq_undelay; q->toplevel = TC_CBQ_MAXLEVEL; - PSCHED_GET_TIME(q->now); + q->now = psched_get_time(); q->now_rt = q->now; cbq_link_class(&q->link); @@ -1467,19 +1462,19 @@ static int cbq_init(struct Qdisc *sch, s static __inline__ int cbq_dump_rate(struct sk_buff *skb, struct cbq_class *cl) { - unsigned char *b = skb->tail; + unsigned char *b = skb_tail_pointer(skb); RTA_PUT(skb, TCA_CBQ_RATE, sizeof(cl->R_tab->rate), &cl->R_tab->rate); return skb->len; rtattr_failure: - skb_trim(skb, b - skb->data); + nlmsg_trim(skb, b); return -1; } static __inline__ int cbq_dump_lss(struct sk_buff *skb, struct cbq_class *cl) { - unsigned char *b = skb->tail; + unsigned char *b = skb_tail_pointer(skb); struct tc_cbq_lssopt opt; opt.flags = 0; @@ -1498,13 +1493,13 @@ static __inline__ int cbq_dump_lss(struc return skb->len; rtattr_failure: - skb_trim(skb, b - skb->data); + nlmsg_trim(skb, b); return -1; } static __inline__ int cbq_dump_wrr(struct sk_buff *skb, struct cbq_class *cl) { - unsigned char *b = skb->tail; + unsigned char *b = skb_tail_pointer(skb); struct tc_cbq_wrropt opt; opt.flags = 0; @@ -1516,30 +1511,30 @@ static __inline__ int cbq_dump_wrr(struc return skb->len; rtattr_failure: - skb_trim(skb, b - skb->data); + nlmsg_trim(skb, b); return -1; } static __inline__ int cbq_dump_ovl(struct sk_buff *skb, struct cbq_class *cl) { - unsigned char *b = skb->tail; + unsigned char *b = skb_tail_pointer(skb); struct tc_cbq_ovl opt; opt.strategy = cl->ovl_strategy; opt.priority2 = cl->priority2+1; opt.pad = 0; - opt.penalty = (cl->penalty*1000)/HZ; + opt.penalty = cl->penalty; RTA_PUT(skb, TCA_CBQ_OVL_STRATEGY, sizeof(opt), &opt); return skb->len; rtattr_failure: - skb_trim(skb, b - skb->data); + nlmsg_trim(skb, b); return -1; } static __inline__ int cbq_dump_fopt(struct sk_buff *skb, struct cbq_class *cl) { - unsigned char *b = skb->tail; + unsigned char *b = skb_tail_pointer(skb); struct tc_cbq_fopt opt; if (cl->split || cl->defmap) { @@ -1551,14 +1546,14 @@ static __inline__ int cbq_dump_fopt(stru return skb->len; rtattr_failure: - skb_trim(skb, b - skb->data); + nlmsg_trim(skb, b); return -1; } #ifdef CONFIG_NET_CLS_POLICE static __inline__ int cbq_dump_police(struct sk_buff *skb, struct cbq_class *cl) { - unsigned char *b = skb->tail; + unsigned char *b = skb_tail_pointer(skb); struct tc_cbq_police opt; if (cl->police) { @@ -1570,7 +1565,7 @@ static __inline__ int cbq_dump_police(st return skb->len; rtattr_failure: - skb_trim(skb, b - skb->data); + nlmsg_trim(skb, b); return -1; } #endif @@ -1592,18 +1587,18 @@ static int cbq_dump_attr(struct sk_buff static int cbq_dump(struct Qdisc *sch, struct sk_buff *skb) { struct cbq_sched_data *q = qdisc_priv(sch); - unsigned char *b = skb->tail; + unsigned char *b = skb_tail_pointer(skb); struct rtattr *rta; rta = (struct rtattr*)b; RTA_PUT(skb, TCA_OPTIONS, 0, NULL); if (cbq_dump_attr(skb, &q->link) < 0) goto rtattr_failure; - rta->rta_len = skb->tail - b; + rta->rta_len = skb_tail_pointer(skb) - b; return skb->len; rtattr_failure: - skb_trim(skb, b - skb->data); + nlmsg_trim(skb, b); return -1; } @@ -1621,7 +1616,7 @@ cbq_dump_class(struct Qdisc *sch, unsign struct sk_buff *skb, struct tcmsg *tcm) { struct cbq_class *cl = (struct cbq_class*)arg; - unsigned char *b = skb->tail; + unsigned char *b = skb_tail_pointer(skb); struct rtattr *rta; if (cl->tparent) @@ -1635,11 +1630,11 @@ cbq_dump_class(struct Qdisc *sch, unsign RTA_PUT(skb, TCA_OPTIONS, 0, NULL); if (cbq_dump_attr(skb, cl) < 0) goto 
rtattr_failure; - rta->rta_len = skb->tail - b; + rta->rta_len = skb_tail_pointer(skb) - b; return skb->len; rtattr_failure: - skb_trim(skb, b - skb->data); + nlmsg_trim(skb, b); return -1; } @@ -1654,8 +1649,8 @@ cbq_dump_class_stats(struct Qdisc *sch, cl->xstats.avgidle = cl->avgidle; cl->xstats.undertime = 0; - if (!PSCHED_IS_PASTPERFECT(cl->undertime)) - cl->xstats.undertime = PSCHED_TDIFF(cl->undertime, q->now); + if (cl->undertime != PSCHED_PASTPERFECT) + cl->xstats.undertime = cl->undertime - q->now; if (gnet_stats_copy_basic(d, &cl->bstats) < 0 || #ifdef CONFIG_NET_ESTIMATOR @@ -1722,23 +1717,13 @@ static unsigned long cbq_get(struct Qdis return 0; } -static void cbq_destroy_filters(struct cbq_class *cl) -{ - struct tcf_proto *tp; - - while ((tp = cl->filter_list) != NULL) { - cl->filter_list = tp->next; - tcf_destroy(tp); - } -} - static void cbq_destroy_class(struct Qdisc *sch, struct cbq_class *cl) { struct cbq_sched_data *q = qdisc_priv(sch); BUG_TRAP(!cl->filters); - cbq_destroy_filters(cl); + tcf_destroy_chain(cl->filter_list); qdisc_destroy(cl->q); qdisc_put_rtab(cl->R_tab); #ifdef CONFIG_NET_ESTIMATOR @@ -1765,7 +1750,7 @@ cbq_destroy(struct Qdisc* sch) */ for (h = 0; h < 16; h++) for (cl = q->classes[h]; cl; cl = cl->next) - cbq_destroy_filters(cl); + tcf_destroy_chain(cl->filter_list); for (h = 0; h < 16; h++) { struct cbq_class *next; diff -puN net/sched/sch_dsmark.c~git-net net/sched/sch_dsmark.c --- a/net/sched/sch_dsmark.c~git-net +++ a/net/sched/sch_dsmark.c @@ -216,17 +216,17 @@ static int dsmark_enqueue(struct sk_buff /* FIXME: Safe with non-linear skbs? --RR */ switch (skb->protocol) { case __constant_htons(ETH_P_IP): - skb->tc_index = ipv4_get_dsfield(skb->nh.iph) + skb->tc_index = ipv4_get_dsfield(ip_hdr(skb)) & ~INET_ECN_MASK; break; case __constant_htons(ETH_P_IPV6): - skb->tc_index = ipv6_get_dsfield(skb->nh.ipv6h) + skb->tc_index = ipv6_get_dsfield(ipv6_hdr(skb)) & ~INET_ECN_MASK; break; default: skb->tc_index = 0; break; - }; + } } if (TC_H_MAJ(skb->priority) == sch->handle) @@ -257,7 +257,7 @@ static int dsmark_enqueue(struct sk_buff if (p->default_index != NO_DEFAULT_INDEX) skb->tc_index = p->default_index; break; - }; + } } err = p->q->enqueue(skb,p->q); @@ -292,11 +292,11 @@ static struct sk_buff *dsmark_dequeue(st switch (skb->protocol) { case __constant_htons(ETH_P_IP): - ipv4_change_dsfield(skb->nh.iph, p->mask[index], + ipv4_change_dsfield(ip_hdr(skb), p->mask[index], p->value[index]); break; case __constant_htons(ETH_P_IPV6): - ipv6_change_dsfield(skb->nh.ipv6h, p->mask[index], + ipv6_change_dsfield(ipv6_hdr(skb), p->mask[index], p->value[index]); break; default: @@ -310,7 +310,7 @@ static struct sk_buff *dsmark_dequeue(st "unsupported protocol %d\n", ntohs(skb->protocol)); break; - }; + } return skb; } @@ -412,16 +412,10 @@ static void dsmark_reset(struct Qdisc *s static void dsmark_destroy(struct Qdisc *sch) { struct dsmark_qdisc_data *p = PRIV(sch); - struct tcf_proto *tp; DPRINTK("dsmark_destroy(sch %p,[qdisc %p])\n", sch, p); - while (p->filter_list) { - tp = p->filter_list; - p->filter_list = tp->next; - tcf_destroy(tp); - } - + tcf_destroy_chain(p->filter_list); qdisc_destroy(p->q); kfree(p->mask); } diff -puN net/sched/sch_generic.c~git-net net/sched/sch_generic.c --- a/net/sched/sch_generic.c~git-net +++ a/net/sched/sch_generic.c @@ -36,34 +36,27 @@ /* Main transmission queue. */ -/* Main qdisc structure lock. - - However, modifications - to data, participating in scheduling must be additionally - protected with dev->queue_lock spinlock. 
- - The idea is the following: - - enqueue, dequeue are serialized via top level device - spinlock dev->queue_lock. - - tree walking is protected by read_lock(qdisc_tree_lock) - and this lock is used only in process context. - - updates to tree are made only under rtnl semaphore, - hence this lock may be made without local bh disabling. - - qdisc_tree_lock must be grabbed BEFORE dev->queue_lock! +/* Modifications to data participating in scheduling must be protected with + * dev->queue_lock spinlock. + * + * The idea is the following: + * - enqueue, dequeue are serialized via top level device + * spinlock dev->queue_lock. + * - ingress filtering is serialized via top level device + * spinlock dev->ingress_lock. + * - updates to tree and tree walking are only done under the rtnl mutex. */ -DEFINE_RWLOCK(qdisc_tree_lock); void qdisc_lock_tree(struct net_device *dev) { - write_lock(&qdisc_tree_lock); spin_lock_bh(&dev->queue_lock); + spin_lock(&dev->ingress_lock); } void qdisc_unlock_tree(struct net_device *dev) { + spin_unlock(&dev->ingress_lock); spin_unlock_bh(&dev->queue_lock); - write_unlock(&qdisc_tree_lock); } /* @@ -442,7 +435,6 @@ struct Qdisc *qdisc_alloc(struct net_dev sch->dequeue = ops->dequeue; sch->dev = dev; dev_hold(dev); - sch->stats_lock = &dev->queue_lock; atomic_set(&sch->refcnt, 1); return sch; @@ -458,6 +450,7 @@ struct Qdisc * qdisc_create_dflt(struct sch = qdisc_alloc(dev, ops); if (IS_ERR(sch)) goto errout; + sch->stats_lock = &dev->queue_lock; sch->parent = parentid; if (!ops->init || ops->init(sch, NULL) == 0) @@ -528,15 +521,11 @@ void dev_activate(struct net_device *dev printk(KERN_INFO "%s: activation failed\n", dev->name); return; } - write_lock(&qdisc_tree_lock); list_add_tail(&qdisc->list, &dev->qdisc_list); - write_unlock(&qdisc_tree_lock); } else { qdisc = &noqueue_qdisc; } - write_lock(&qdisc_tree_lock); dev->qdisc_sleeping = qdisc; - write_unlock(&qdisc_tree_lock); } if (!netif_carrier_ok(dev)) diff -puN net/sched/sch_hfsc.c~git-net net/sched/sch_hfsc.c --- a/net/sched/sch_hfsc.c~git-net +++ a/net/sched/sch_hfsc.c @@ -59,13 +59,13 @@ #include #include #include -#include #include #include #include #include #include #include +#include #include #include #include @@ -192,23 +192,9 @@ struct hfsc_sched struct list_head droplist; /* active leaf class list (for dropping) */ struct sk_buff_head requeue; /* requeued packet */ - struct timer_list wd_timer; /* watchdog timer */ + struct qdisc_watchdog watchdog; /* watchdog timer */ }; -/* - * macros - */ -#ifdef CONFIG_NET_SCH_CLK_GETTIMEOFDAY -#include -#undef PSCHED_GET_TIME -#define PSCHED_GET_TIME(stamp) \ -do { \ - struct timeval tv; \ - do_gettimeofday(&tv); \ - (stamp) = 1ULL * USEC_PER_SEC * tv.tv_sec + tv.tv_usec; \ -} while (0) -#endif - #define HT_INFINITY 0xffffffffffffffffULL /* infinite time value */ @@ -394,28 +380,17 @@ cftree_update(struct hfsc_class *cl) * ism: (psched_us/byte) << ISM_SHIFT * dx: psched_us * - * Clock source resolution (CONFIG_NET_SCH_CLK_*) - * JIFFIES: for 48<=HZ<=1534 resolution is between 0.63us and 1.27us. - * CPU: resolution is between 0.5us and 1us. - * GETTIMEOFDAY: resolution is exactly 1us. + * The clock source resolution with ktime is 1.024us. * * sm and ism are scaled in order to keep effective digits. * SM_SHIFT and ISM_SHIFT are selected to keep at least 4 effective * digits in decimal using the following table. 
* - * Note: We can afford the additional accuracy (altq hfsc keeps at most - * 3 effective digits) thanks to the fact that linux clock is bounded - * much more tightly. - * * bits/sec 100Kbps 1Mbps 10Mbps 100Mbps 1Gbps * ------------+------------------------------------------------------- - * bytes/0.5us 6.25e-3 62.5e-3 625e-3 6250e-e 62500e-3 - * bytes/us 12.5e-3 125e-3 1250e-3 12500e-3 125000e-3 - * bytes/1.27us 15.875e-3 158.75e-3 1587.5e-3 15875e-3 158750e-3 + * bytes/1.024us 12.8e-3 128e-3 1280e-3 12800e-3 128000e-3 * - * 0.5us/byte 160 16 1.6 0.16 0.016 - * us/byte 80 8 0.8 0.08 0.008 - * 1.27us/byte 63 6.3 0.63 0.063 0.0063 + * 1.024us/byte 78.125 7.8125 0.78125 0.078125 0.0078125 */ #define SM_SHIFT 20 #define ISM_SHIFT 18 @@ -460,8 +435,8 @@ m2sm(u32 m) u64 sm; sm = ((u64)m << SM_SHIFT); - sm += PSCHED_JIFFIE2US(HZ) - 1; - do_div(sm, PSCHED_JIFFIE2US(HZ)); + sm += PSCHED_TICKS_PER_SEC - 1; + do_div(sm, PSCHED_TICKS_PER_SEC); return sm; } @@ -474,7 +449,7 @@ m2ism(u32 m) if (m == 0) ism = HT_INFINITY; else { - ism = ((u64)PSCHED_JIFFIE2US(HZ) << ISM_SHIFT); + ism = ((u64)PSCHED_TICKS_PER_SEC << ISM_SHIFT); ism += m - 1; do_div(ism, m); } @@ -487,7 +462,7 @@ d2dx(u32 d) { u64 dx; - dx = ((u64)d * PSCHED_JIFFIE2US(HZ)); + dx = ((u64)d * PSCHED_TICKS_PER_SEC); dx += USEC_PER_SEC - 1; do_div(dx, USEC_PER_SEC); return dx; @@ -499,7 +474,7 @@ sm2m(u64 sm) { u64 m; - m = (sm * PSCHED_JIFFIE2US(HZ)) >> SM_SHIFT; + m = (sm * PSCHED_TICKS_PER_SEC) >> SM_SHIFT; return (u32)m; } @@ -510,7 +485,7 @@ dx2d(u64 dx) u64 d; d = dx * USEC_PER_SEC; - do_div(d, PSCHED_JIFFIE2US(HZ)); + do_div(d, PSCHED_TICKS_PER_SEC); return (u32)d; } @@ -654,9 +629,7 @@ rtsc_min(struct runtime_sc *rtsc, struct static void init_ed(struct hfsc_class *cl, unsigned int next_len) { - u64 cur_time; - - PSCHED_GET_TIME(cur_time); + u64 cur_time = psched_get_time(); /* update the deadline curve */ rtsc_min(&cl->cl_deadline, &cl->cl_rsc, cur_time, cl->cl_cumul); @@ -779,7 +752,7 @@ init_vf(struct hfsc_class *cl, unsigned if (cl->cl_flags & HFSC_USC) { /* class has upper limit curve */ if (cur_time == 0) - PSCHED_GET_TIME(cur_time); + cur_time = psched_get_time(); /* update the ulimit curve */ rtsc_min(&cl->cl_ulimit, &cl->cl_usc, cur_time, @@ -1063,7 +1036,7 @@ hfsc_change_class(struct Qdisc *sch, u32 if (cl->cl_parent == NULL && parentid != TC_H_ROOT) return -EINVAL; } - PSCHED_GET_TIME(cur_time); + cur_time = psched_get_time(); sch_tree_lock(sch); if (rsc != NULL) @@ -1149,22 +1122,11 @@ hfsc_change_class(struct Qdisc *sch, u32 } static void -hfsc_destroy_filters(struct tcf_proto **fl) -{ - struct tcf_proto *tp; - - while ((tp = *fl) != NULL) { - *fl = tp->next; - tcf_destroy(tp); - } -} - -static void hfsc_destroy_class(struct Qdisc *sch, struct hfsc_class *cl) { struct hfsc_sched *q = qdisc_priv(sch); - hfsc_destroy_filters(&cl->filter_list); + tcf_destroy_chain(cl->filter_list); qdisc_destroy(cl->qdisc); #ifdef CONFIG_NET_ESTIMATOR gen_kill_estimator(&cl->bstats, &cl->rate_est); @@ -1389,7 +1351,7 @@ hfsc_dump_class(struct Qdisc *sch, unsig struct tcmsg *tcm) { struct hfsc_class *cl = (struct hfsc_class *)arg; - unsigned char *b = skb->tail; + unsigned char *b = skb_tail_pointer(skb); struct rtattr *rta = (struct rtattr *)b; tcm->tcm_parent = cl->cl_parent ? 
cl->cl_parent->classid : TC_H_ROOT; @@ -1400,11 +1362,11 @@ hfsc_dump_class(struct Qdisc *sch, unsig RTA_PUT(skb, TCA_OPTIONS, 0, NULL); if (hfsc_dump_curves(skb, cl) < 0) goto rtattr_failure; - rta->rta_len = skb->tail - b; + rta->rta_len = skb_tail_pointer(skb) - b; return skb->len; rtattr_failure: - skb_trim(skb, b - skb->data); + nlmsg_trim(skb, b); return -1; } @@ -1459,21 +1421,11 @@ hfsc_walk(struct Qdisc *sch, struct qdis } static void -hfsc_watchdog(unsigned long arg) -{ - struct Qdisc *sch = (struct Qdisc *)arg; - - sch->flags &= ~TCQ_F_THROTTLED; - netif_schedule(sch->dev); -} - -static void -hfsc_schedule_watchdog(struct Qdisc *sch, u64 cur_time) +hfsc_schedule_watchdog(struct Qdisc *sch) { struct hfsc_sched *q = qdisc_priv(sch); struct hfsc_class *cl; u64 next_time = 0; - long delay; if ((cl = eltree_get_minel(q)) != NULL) next_time = cl->cl_e; @@ -1482,11 +1434,7 @@ hfsc_schedule_watchdog(struct Qdisc *sch next_time = q->root.cl_cfmin; } WARN_ON(next_time == 0); - delay = next_time - cur_time; - delay = PSCHED_US2JIFFIE(delay); - - sch->flags |= TCQ_F_THROTTLED; - mod_timer(&q->wd_timer, jiffies + delay); + qdisc_watchdog_schedule(&q->watchdog, next_time); } static int @@ -1523,9 +1471,7 @@ hfsc_init_qdisc(struct Qdisc *sch, struc list_add(&q->root.hlist, &q->clhash[hfsc_hash(q->root.classid)]); - init_timer(&q->wd_timer); - q->wd_timer.function = hfsc_watchdog; - q->wd_timer.data = (unsigned long)sch; + qdisc_watchdog_init(&q->watchdog, sch); return 0; } @@ -1595,8 +1541,7 @@ hfsc_reset_qdisc(struct Qdisc *sch) __skb_queue_purge(&q->requeue); q->eligible = RB_ROOT; INIT_LIST_HEAD(&q->droplist); - del_timer(&q->wd_timer); - sch->flags &= ~TCQ_F_THROTTLED; + qdisc_watchdog_cancel(&q->watchdog); sch->q.qlen = 0; } @@ -1612,14 +1557,14 @@ hfsc_destroy_qdisc(struct Qdisc *sch) hfsc_destroy_class(sch, cl); } __skb_queue_purge(&q->requeue); - del_timer(&q->wd_timer); + qdisc_watchdog_cancel(&q->watchdog); } static int hfsc_dump_qdisc(struct Qdisc *sch, struct sk_buff *skb) { struct hfsc_sched *q = qdisc_priv(sch); - unsigned char *b = skb->tail; + unsigned char *b = skb_tail_pointer(skb); struct tc_hfsc_qopt qopt; qopt.defcls = q->defcls; @@ -1627,7 +1572,7 @@ hfsc_dump_qdisc(struct Qdisc *sch, struc return skb->len; rtattr_failure: - skb_trim(skb, b - skb->data); + nlmsg_trim(skb, b); return -1; } @@ -1681,7 +1626,7 @@ hfsc_dequeue(struct Qdisc *sch) if ((skb = __skb_dequeue(&q->requeue))) goto out; - PSCHED_GET_TIME(cur_time); + cur_time = psched_get_time(); /* * if there are eligible classes, use real-time criteria. 
@@ -1698,7 +1643,7 @@ hfsc_dequeue(struct Qdisc *sch) cl = vttree_get_minvt(&q->root, cur_time); if (cl == NULL) { sch->qstats.overlimits++; - hfsc_schedule_watchdog(sch, cur_time); + hfsc_schedule_watchdog(sch); return NULL; } } diff -puN net/sched/sch_htb.c~git-net net/sched/sch_htb.c --- a/net/sched/sch_htb.c~git-net +++ a/net/sched/sch_htb.c @@ -50,6 +50,7 @@ #include #include #include +#include #include #include #include @@ -128,7 +129,7 @@ struct htb_class { } un; struct rb_node node[TC_HTB_NUMPRIO]; /* node for self or feed tree */ struct rb_node pq_node; /* node for event queue */ - unsigned long pq_key; /* the same type as jiffies global */ + psched_time_t pq_key; int prio_activity; /* for which prios are we active */ enum htb_cmode cmode; /* current mode of the class */ @@ -179,10 +180,7 @@ struct htb_sched { struct rb_root wait_pq[TC_HTB_MAXDEPTH]; /* time of nearest event per level (row) */ - unsigned long near_ev_cache[TC_HTB_MAXDEPTH]; - - /* cached value of jiffies in dequeue */ - unsigned long jiffies; + psched_time_t near_ev_cache[TC_HTB_MAXDEPTH]; /* whether we hit non-work conserving class during this dequeue; we use */ int nwc_hit; /* this to disable mindelay complaint in dequeue */ @@ -195,7 +193,7 @@ struct htb_sched { int rate2quantum; /* quant = rate / rate2quantum */ psched_time_t now; /* cached dequeue time */ - struct timer_list timer; /* send delay timer */ + struct qdisc_watchdog watchdog; #ifdef HTB_RATECM struct timer_list rttim; /* rate computer timer */ int recmp_bucket; /* which hash bucket to recompute next */ @@ -342,19 +340,19 @@ static void htb_add_to_wait_tree(struct { struct rb_node **p = &q->wait_pq[cl->level].rb_node, *parent = NULL; - cl->pq_key = q->jiffies + PSCHED_US2JIFFIE(delay); - if (cl->pq_key == q->jiffies) + cl->pq_key = q->now + delay; + if (cl->pq_key == q->now) cl->pq_key++; /* update the nearest event cache */ - if (time_after(q->near_ev_cache[cl->level], cl->pq_key)) + if (q->near_ev_cache[cl->level] > cl->pq_key) q->near_ev_cache[cl->level] = cl->pq_key; while (*p) { struct htb_class *c; parent = *p; c = rb_entry(parent, struct htb_class, pq_node); - if (time_after_eq(cl->pq_key, c->pq_key)) + if (cl->pq_key >= c->pq_key) p = &parent->rb_right; else p = &parent->rb_left; @@ -679,14 +677,6 @@ static int htb_requeue(struct sk_buff *s return NET_XMIT_SUCCESS; } -static void htb_timer(unsigned long arg) -{ - struct Qdisc *sch = (struct Qdisc *)arg; - sch->flags &= ~TCQ_F_THROTTLED; - wmb(); - netif_schedule(sch->dev); -} - #ifdef HTB_RATECM #define RT_GEN(D,R) R+=D-(R/HTB_EWMAC);D=0 static void htb_rate_timer(unsigned long arg) @@ -739,7 +729,7 @@ static void htb_charge_class(struct htb_ cl->T = toks while (cl) { - diff = PSCHED_TDIFF_SAFE(q->now, cl->t_c, (u32) cl->mbuffer); + diff = psched_tdiff_bounded(q->now, cl->t_c, cl->mbuffer); if (cl->level >= level) { if (cl->level == level) cl->xstats.lends++; @@ -778,11 +768,11 @@ static void htb_charge_class(struct htb_ /** * htb_do_events - make mode changes to classes at the level * - * Scans event queue for pending events and applies them. Returns jiffies to + * Scans event queue for pending events and applies them. Returns time of * next pending event (0 for no event in pq). - * Note: Aplied are events whose have cl->pq_key <= jiffies. + * Note: Applied are events whose have cl->pq_key <= q->now. 
*/ -static long htb_do_events(struct htb_sched *q, int level) +static psched_time_t htb_do_events(struct htb_sched *q, int level) { int i; @@ -795,18 +785,18 @@ static long htb_do_events(struct htb_sch return 0; cl = rb_entry(p, struct htb_class, pq_node); - if (time_after(cl->pq_key, q->jiffies)) { - return cl->pq_key - q->jiffies; - } + if (cl->pq_key > q->now) + return cl->pq_key; + htb_safe_rb_erase(p, q->wait_pq + level); - diff = PSCHED_TDIFF_SAFE(q->now, cl->t_c, (u32) cl->mbuffer); + diff = psched_tdiff_bounded(q->now, cl->t_c, cl->mbuffer); htb_change_class_mode(q, cl, &diff); if (cl->cmode != HTB_CAN_SEND) htb_add_to_wait_tree(q, cl, diff); } if (net_ratelimit()) printk(KERN_WARNING "htb: too many events !\n"); - return HZ / 10; + return q->now + PSCHED_TICKS_PER_SEC / 10; } /* Returns class->node+prio from id-tree where classe's id is >= id. NULL @@ -958,30 +948,12 @@ next: return skb; } -static void htb_delay_by(struct Qdisc *sch, long delay) -{ - struct htb_sched *q = qdisc_priv(sch); - if (delay <= 0) - delay = 1; - if (unlikely(delay > 5 * HZ)) { - if (net_ratelimit()) - printk(KERN_INFO "HTB delay %ld > 5sec\n", delay); - delay = 5 * HZ; - } - /* why don't use jiffies here ? because expires can be in past */ - mod_timer(&q->timer, q->jiffies + delay); - sch->flags |= TCQ_F_THROTTLED; - sch->qstats.overlimits++; -} - static struct sk_buff *htb_dequeue(struct Qdisc *sch) { struct sk_buff *skb = NULL; struct htb_sched *q = qdisc_priv(sch); int level; - long min_delay; - - q->jiffies = jiffies; + psched_time_t next_event; /* try to dequeue direct packets as high prio (!) to minimize cpu work */ skb = __skb_dequeue(&q->direct_queue); @@ -993,23 +965,25 @@ static struct sk_buff *htb_dequeue(struc if (!sch->q.qlen) goto fin; - PSCHED_GET_TIME(q->now); + q->now = psched_get_time(); - min_delay = LONG_MAX; + next_event = q->now + 5 * PSCHED_TICKS_PER_SEC; q->nwc_hit = 0; for (level = 0; level < TC_HTB_MAXDEPTH; level++) { /* common case optimization - skip event handler quickly */ int m; - long delay; - if (time_after_eq(q->jiffies, q->near_ev_cache[level])) { - delay = htb_do_events(q, level); - q->near_ev_cache[level] = - q->jiffies + (delay ? delay : HZ); + psched_time_t event; + + if (q->now >= q->near_ev_cache[level]) { + event = htb_do_events(q, level); + q->near_ev_cache[level] = event ? event : + PSCHED_TICKS_PER_SEC; } else - delay = q->near_ev_cache[level] - q->jiffies; + event = q->near_ev_cache[level]; + + if (event && next_event > event) + next_event = event; - if (delay && min_delay > delay) - min_delay = delay; m = ~q->row_mask[level]; while (m != (int)(-1)) { int prio = ffz(m); @@ -1022,7 +996,8 @@ static struct sk_buff *htb_dequeue(struc } } } - htb_delay_by(sch, min_delay > 5 * HZ ? 
5 * HZ : min_delay); + sch->qstats.overlimits++; + qdisc_watchdog_schedule(&q->watchdog, next_event); fin: return skb; } @@ -1075,8 +1050,7 @@ static void htb_reset(struct Qdisc *sch) } } - sch->flags &= ~TCQ_F_THROTTLED; - del_timer(&q->timer); + qdisc_watchdog_cancel(&q->watchdog); __skb_queue_purge(&q->direct_queue); sch->q.qlen = 0; memset(q->row, 0, sizeof(q->row)); @@ -1113,14 +1087,12 @@ static int htb_init(struct Qdisc *sch, s for (i = 0; i < TC_HTB_NUMPRIO; i++) INIT_LIST_HEAD(q->drops + i); - init_timer(&q->timer); + qdisc_watchdog_init(&q->watchdog, sch); skb_queue_head_init(&q->direct_queue); q->direct_qlen = sch->dev->tx_queue_len; if (q->direct_qlen < 2) /* some devices have zero tx_queue_len */ q->direct_qlen = 2; - q->timer.function = htb_timer; - q->timer.data = (unsigned long)sch; #ifdef HTB_RATECM init_timer(&q->rttim); @@ -1139,7 +1111,7 @@ static int htb_init(struct Qdisc *sch, s static int htb_dump(struct Qdisc *sch, struct sk_buff *skb) { struct htb_sched *q = qdisc_priv(sch); - unsigned char *b = skb->tail; + unsigned char *b = skb_tail_pointer(skb); struct rtattr *rta; struct tc_htb_glob gopt; spin_lock_bh(&sch->dev->queue_lock); @@ -1152,12 +1124,12 @@ static int htb_dump(struct Qdisc *sch, s rta = (struct rtattr *)b; RTA_PUT(skb, TCA_OPTIONS, 0, NULL); RTA_PUT(skb, TCA_HTB_INIT, sizeof(gopt), &gopt); - rta->rta_len = skb->tail - b; + rta->rta_len = skb_tail_pointer(skb) - b; spin_unlock_bh(&sch->dev->queue_lock); return skb->len; rtattr_failure: spin_unlock_bh(&sch->dev->queue_lock); - skb_trim(skb, skb->tail - skb->data); + nlmsg_trim(skb, skb_tail_pointer(skb)); return -1; } @@ -1165,7 +1137,7 @@ static int htb_dump_class(struct Qdisc * struct sk_buff *skb, struct tcmsg *tcm) { struct htb_class *cl = (struct htb_class *)arg; - unsigned char *b = skb->tail; + unsigned char *b = skb_tail_pointer(skb); struct rtattr *rta; struct tc_htb_opt opt; @@ -1188,12 +1160,12 @@ static int htb_dump_class(struct Qdisc * opt.prio = cl->un.leaf.prio; opt.level = cl->level; RTA_PUT(skb, TCA_HTB_PARMS, sizeof(opt), &opt); - rta->rta_len = skb->tail - b; + rta->rta_len = skb_tail_pointer(skb) - b; spin_unlock_bh(&sch->dev->queue_lock); return skb->len; rtattr_failure: spin_unlock_bh(&sch->dev->queue_lock); - skb_trim(skb, b - skb->data); + nlmsg_trim(skb, b); return -1; } @@ -1264,16 +1236,6 @@ static unsigned long htb_get(struct Qdis return (unsigned long)cl; } -static void htb_destroy_filters(struct tcf_proto **fl) -{ - struct tcf_proto *tp; - - while ((tp = *fl) != NULL) { - *fl = tp->next; - tcf_destroy(tp); - } -} - static inline int htb_parent_last_child(struct htb_class *cl) { if (!cl->parent) @@ -1302,7 +1264,7 @@ static void htb_parent_to_leaf(struct ht parent->un.leaf.prio = parent->prio; parent->tokens = parent->buffer; parent->ctokens = parent->cbuffer; - PSCHED_GET_TIME(parent->t_c); + parent->t_c = psched_get_time(); parent->cmode = HTB_CAN_SEND; } @@ -1317,7 +1279,7 @@ static void htb_destroy_class(struct Qdi qdisc_put_rtab(cl->rate); qdisc_put_rtab(cl->ceil); - htb_destroy_filters(&cl->filter_list); + tcf_destroy_chain(cl->filter_list); while (!list_empty(&cl->children)) htb_destroy_class(sch, list_entry(cl->children.next, @@ -1341,7 +1303,7 @@ static void htb_destroy(struct Qdisc *sc { struct htb_sched *q = qdisc_priv(sch); - del_timer_sync(&q->timer); + qdisc_watchdog_cancel(&q->watchdog); #ifdef HTB_RATECM del_timer_sync(&q->rttim); #endif @@ -1349,7 +1311,7 @@ static void htb_destroy(struct Qdisc *sc and surprisingly it worked in 2.4. 
But it must precede it because filter need its target class alive to be able to call unbind_filter on it (without Oops). */ - htb_destroy_filters(&q->filter_list); + tcf_destroy_chain(q->filter_list); while (!list_empty(&q->root)) htb_destroy_class(sch, list_entry(q->root.next, @@ -1498,8 +1460,8 @@ static int htb_change_class(struct Qdisc /* set class to be in HTB_CAN_SEND state */ cl->tokens = hopt->buffer; cl->ctokens = hopt->cbuffer; - cl->mbuffer = PSCHED_JIFFIE2US(HZ * 60); /* 1min */ - PSCHED_GET_TIME(cl->t_c); + cl->mbuffer = 60 * PSCHED_TICKS_PER_SEC; /* 1min */ + cl->t_c = psched_get_time(); cl->cmode = HTB_CAN_SEND; /* attach to the hash list and parent's family */ diff -puN net/sched/sch_ingress.c~git-net net/sched/sch_ingress.c --- a/net/sched/sch_ingress.c~git-net +++ a/net/sched/sch_ingress.c @@ -16,6 +16,7 @@ #include #include #include +#include #include #include #include @@ -169,7 +170,7 @@ static int ingress_enqueue(struct sk_buf skb->tc_index = TC_H_MIN(res.classid); result = TC_ACT_OK; break; - }; + } /* backward compat */ #else #ifdef CONFIG_NET_CLS_POLICE @@ -186,7 +187,7 @@ static int ingress_enqueue(struct sk_buf sch->bstats.bytes += skb->len; result = NF_ACCEPT; break; - }; + } #else D2PRINTK("Overriding result to ACCEPT\n"); @@ -247,16 +248,11 @@ ing_hook(unsigned int hook, struct sk_bu skb->dev ? (*pskb)->dev->name : "(no dev)", skb->len); -/* -revisit later: Use a private since lock dev->queue_lock is also -used on the egress (might slow things for an iota) -*/ - if (dev->qdisc_ingress) { - spin_lock(&dev->queue_lock); + spin_lock(&dev->ingress_lock); if ((q = dev->qdisc_ingress) != NULL) fwres = q->enqueue(skb, q); - spin_unlock(&dev->queue_lock); + spin_unlock(&dev->ingress_lock); } return fwres; @@ -345,14 +341,9 @@ static void ingress_reset(struct Qdisc * static void ingress_destroy(struct Qdisc *sch) { struct ingress_qdisc_data *p = PRIV(sch); - struct tcf_proto *tp; DPRINTK("ingress_destroy(sch %p,[qdisc %p])\n", sch, p); - while (p->filter_list) { - tp = p->filter_list; - p->filter_list = tp->next; - tcf_destroy(tp); - } + tcf_destroy_chain(p->filter_list); #if 0 /* for future use */ qdisc_destroy(p->q); @@ -362,16 +353,16 @@ static void ingress_destroy(struct Qdisc static int ingress_dump(struct Qdisc *sch, struct sk_buff *skb) { - unsigned char *b = skb->tail; + unsigned char *b = skb_tail_pointer(skb); struct rtattr *rta; rta = (struct rtattr *) b; RTA_PUT(skb, TCA_OPTIONS, 0, NULL); - rta->rta_len = skb->tail - b; + rta->rta_len = skb_tail_pointer(skb) - b; return skb->len; rtattr_failure: - skb_trim(skb, b - skb->data); + nlmsg_trim(skb, b); return -1; } diff -puN net/sched/sch_netem.c~git-net net/sched/sch_netem.c --- a/net/sched/sch_netem.c~git-net +++ a/net/sched/sch_netem.c @@ -22,6 +22,7 @@ #include #include +#include #include #define VERSION "1.2" @@ -54,21 +55,22 @@ struct netem_sched_data { struct Qdisc *qdisc; - struct timer_list timer; + struct qdisc_watchdog watchdog; + + psched_tdiff_t latency; + psched_tdiff_t jitter; - u32 latency; u32 loss; u32 limit; u32 counter; u32 gap; - u32 jitter; u32 duplicate; u32 reorder; u32 corrupt; struct crndstate { - unsigned long last; - unsigned long rho; + u32 last; + u32 rho; } delay_cor, loss_cor, dup_cor, reorder_cor, corrupt_cor; struct disttable { @@ -95,12 +97,12 @@ static void init_crandom(struct crndstat * Next number depends on last value. * rho is scaled to avoid floating point. 
*/ -static unsigned long get_crandom(struct crndstate *state) +static u32 get_crandom(struct crndstate *state) { u64 value, rho; unsigned long answer; - if (state->rho == 0) /* no correllation */ + if (state->rho == 0) /* no correlation */ return net_random(); value = net_random(); @@ -114,11 +116,13 @@ static unsigned long get_crandom(struct * std deviation sigma. Uses table lookup to approximate the desired * distribution, and a uniformly-distributed pseudo-random source. */ -static long tabledist(unsigned long mu, long sigma, - struct crndstate *state, const struct disttable *dist) -{ - long t, x; - unsigned long rnd; +static psched_tdiff_t tabledist(psched_tdiff_t mu, psched_tdiff_t sigma, + struct crndstate *state, + const struct disttable *dist) +{ + psched_tdiff_t x; + long t; + u32 rnd; if (sigma == 0) return mu; @@ -213,8 +217,8 @@ static int netem_enqueue(struct sk_buff delay = tabledist(q->latency, q->jitter, &q->delay_cor, q->delay_dist); - PSCHED_GET_TIME(now); - PSCHED_TADD2(now, delay, cb->time_to_send); + now = psched_get_time(); + cb->time_to_send = now + delay; ++q->counter; ret = q->qdisc->enqueue(skb, q->qdisc); } else { @@ -222,7 +226,7 @@ static int netem_enqueue(struct sk_buff * Do re-ordering by putting one out of N packets at the front * of the queue. */ - PSCHED_GET_TIME(cb->time_to_send); + cb->time_to_send = psched_get_time(); q->counter = 0; ret = q->qdisc->ops->requeue(skb, q->qdisc); } @@ -269,55 +273,43 @@ static struct sk_buff *netem_dequeue(str struct netem_sched_data *q = qdisc_priv(sch); struct sk_buff *skb; + smp_mb(); + if (sch->flags & TCQ_F_THROTTLED) + return NULL; + skb = q->qdisc->dequeue(q->qdisc); if (skb) { const struct netem_skb_cb *cb = (const struct netem_skb_cb *)skb->cb; - psched_time_t now; + psched_time_t now = psched_get_time(); /* if more time remaining? */ - PSCHED_GET_TIME(now); - - if (PSCHED_TLESS(cb->time_to_send, now)) { + if (cb->time_to_send <= now) { pr_debug("netem_dequeue: return skb=%p\n", skb); sch->q.qlen--; - sch->flags &= ~TCQ_F_THROTTLED; return skb; - } else { - psched_tdiff_t delay = PSCHED_TDIFF(cb->time_to_send, now); - - if (q->qdisc->ops->requeue(skb, q->qdisc) != NET_XMIT_SUCCESS) { - qdisc_tree_decrease_qlen(q->qdisc, 1); - sch->qstats.drops++; - printk(KERN_ERR "netem: queue discpline %s could not requeue\n", - q->qdisc->ops->id); - } + } - mod_timer(&q->timer, jiffies + PSCHED_US2JIFFIE(delay)); - sch->flags |= TCQ_F_THROTTLED; + if (unlikely(q->qdisc->ops->requeue(skb, q->qdisc) != NET_XMIT_SUCCESS)) { + qdisc_tree_decrease_qlen(q->qdisc, 1); + sch->qstats.drops++; + printk(KERN_ERR "netem: %s could not requeue\n", + q->qdisc->ops->id); } + + qdisc_watchdog_schedule(&q->watchdog, cb->time_to_send); } return NULL; } -static void netem_watchdog(unsigned long arg) -{ - struct Qdisc *sch = (struct Qdisc *)arg; - - pr_debug("netem_watchdog qlen=%d\n", sch->q.qlen); - sch->flags &= ~TCQ_F_THROTTLED; - netif_schedule(sch->dev); -} - static void netem_reset(struct Qdisc *sch) { struct netem_sched_data *q = qdisc_priv(sch); qdisc_reset(q->qdisc); sch->q.qlen = 0; - sch->flags &= ~TCQ_F_THROTTLED; - del_timer_sync(&q->timer); + qdisc_watchdog_cancel(&q->watchdog); } /* Pass size change message down to embedded FIFO */ @@ -438,10 +430,11 @@ static int netem_change(struct Qdisc *sc q->loss = qopt->loss; q->duplicate = qopt->duplicate; - /* for compatiablity with earlier versions. - * if gap is set, need to assume 100% probablity + /* for compatibility with earlier versions. 
+ * if gap is set, need to assume 100% probability */ - q->reorder = ~0; + if (q->gap) + q->reorder = ~0; /* Handle nested options after initial queue options. * Should have put all options in nested format but too late now. @@ -487,22 +480,28 @@ static int netem_change(struct Qdisc *sc */ struct fifo_sched_data { u32 limit; + psched_time_t oldest; }; static int tfifo_enqueue(struct sk_buff *nskb, struct Qdisc *sch) { struct fifo_sched_data *q = qdisc_priv(sch); struct sk_buff_head *list = &sch->q; - const struct netem_skb_cb *ncb - = (const struct netem_skb_cb *)nskb->cb; + psched_time_t tnext = ((struct netem_skb_cb *)nskb->cb)->time_to_send; struct sk_buff *skb; if (likely(skb_queue_len(list) < q->limit)) { + /* Optimize for add at tail */ + if (likely(skb_queue_empty(list) || tnext >= q->oldest)) { + q->oldest = tnext; + return qdisc_enqueue_tail(nskb, sch); + } + skb_queue_reverse_walk(list, skb) { const struct netem_skb_cb *cb = (const struct netem_skb_cb *)skb->cb; - if (!PSCHED_TLESS(ncb->time_to_send, cb->time_to_send)) + if (tnext >= cb->time_to_send) break; } @@ -515,7 +514,7 @@ static int tfifo_enqueue(struct sk_buff return NET_XMIT_SUCCESS; } - return qdisc_drop(nskb, sch); + return qdisc_reshape_fail(nskb, sch); } static int tfifo_init(struct Qdisc *sch, struct rtattr *opt) @@ -531,6 +530,7 @@ static int tfifo_init(struct Qdisc *sch, } else q->limit = max_t(u32, sch->dev->tx_queue_len, 1); + q->oldest = PSCHED_PASTPERFECT; return 0; } @@ -567,9 +567,7 @@ static int netem_init(struct Qdisc *sch, if (!opt) return -EINVAL; - init_timer(&q->timer); - q->timer.function = netem_watchdog; - q->timer.data = (unsigned long) sch; + qdisc_watchdog_init(&q->watchdog, sch); q->qdisc = qdisc_create_dflt(sch->dev, &tfifo_qdisc_ops, TC_H_MAKE(sch->handle, 1)); @@ -590,7 +588,7 @@ static void netem_destroy(struct Qdisc * { struct netem_sched_data *q = qdisc_priv(sch); - del_timer_sync(&q->timer); + qdisc_watchdog_cancel(&q->watchdog); qdisc_destroy(q->qdisc); kfree(q->delay_dist); } @@ -598,7 +596,7 @@ static void netem_destroy(struct Qdisc * static int netem_dump(struct Qdisc *sch, struct sk_buff *skb) { const struct netem_sched_data *q = qdisc_priv(sch); - unsigned char *b = skb->tail; + unsigned char *b = skb_tail_pointer(skb); struct rtattr *rta = (struct rtattr *) b; struct tc_netem_qopt qopt; struct tc_netem_corr cor; @@ -626,12 +624,12 @@ static int netem_dump(struct Qdisc *sch, corrupt.correlation = q->corrupt_cor.rho; RTA_PUT(skb, TCA_NETEM_CORRUPT, sizeof(corrupt), &corrupt); - rta->rta_len = skb->tail - b; + rta->rta_len = skb_tail_pointer(skb) - b; return skb->len; rtattr_failure: - skb_trim(skb, b - skb->data); + nlmsg_trim(skb, b); return -1; } diff -puN net/sched/sch_prio.c~git-net net/sched/sch_prio.c --- a/net/sched/sch_prio.c~git-net +++ a/net/sched/sch_prio.c @@ -32,6 +32,7 @@ #include #include #include +#include #include #include @@ -61,7 +62,7 @@ prio_classify(struct sk_buff *skb, struc *qerr = NET_XMIT_SUCCESS; case TC_ACT_SHOT: return NULL; - }; + } if (!q->filter_list ) { #else @@ -188,13 +189,8 @@ prio_destroy(struct Qdisc* sch) { int prio; struct prio_sched_data *q = qdisc_priv(sch); - struct tcf_proto *tp; - - while ((tp = q->filter_list) != NULL) { - q->filter_list = tp->next; - tcf_destroy(tp); - } + tcf_destroy_chain(q->filter_list); for (prio=0; priobands; prio++) qdisc_destroy(q->queues[prio]); } @@ -271,7 +267,7 @@ static int prio_init(struct Qdisc *sch, static int prio_dump(struct Qdisc *sch, struct sk_buff *skb) { struct prio_sched_data *q = qdisc_priv(sch); 
- unsigned char *b = skb->tail; + unsigned char *b = skb_tail_pointer(skb); struct tc_prio_qopt opt; opt.bands = q->bands; @@ -280,7 +276,7 @@ static int prio_dump(struct Qdisc *sch, return skb->len; rtattr_failure: - skb_trim(skb, b - skb->data); + nlmsg_trim(skb, b); return -1; } diff -puN net/sched/sch_sfq.c~git-net net/sched/sch_sfq.c --- a/net/sched/sch_sfq.c~git-net +++ a/net/sched/sch_sfq.c @@ -30,6 +30,7 @@ #include #include #include +#include #include #include #include @@ -137,7 +138,7 @@ static unsigned sfq_hash(struct sfq_sche switch (skb->protocol) { case __constant_htons(ETH_P_IP): { - struct iphdr *iph = skb->nh.iph; + const struct iphdr *iph = ip_hdr(skb); h = iph->daddr; h2 = iph->saddr^iph->protocol; if (!(iph->frag_off&htons(IP_MF|IP_OFFSET)) && @@ -152,7 +153,7 @@ static unsigned sfq_hash(struct sfq_sche } case __constant_htons(ETH_P_IPV6): { - struct ipv6hdr *iph = skb->nh.ipv6h; + struct ipv6hdr *iph = ipv6_hdr(skb); h = iph->daddr.s6_addr32[3]; h2 = iph->saddr.s6_addr32[3]^iph->nexthdr; if (iph->nexthdr == IPPROTO_TCP || @@ -461,7 +462,7 @@ static void sfq_destroy(struct Qdisc *sc static int sfq_dump(struct Qdisc *sch, struct sk_buff *skb) { struct sfq_sched_data *q = qdisc_priv(sch); - unsigned char *b = skb->tail; + unsigned char *b = skb_tail_pointer(skb); struct tc_sfq_qopt opt; opt.quantum = q->quantum; @@ -476,7 +477,7 @@ static int sfq_dump(struct Qdisc *sch, s return skb->len; rtattr_failure: - skb_trim(skb, b - skb->data); + nlmsg_trim(skb, b); return -1; } diff -puN net/sched/sch_tbf.c~git-net net/sched/sch_tbf.c --- a/net/sched/sch_tbf.c~git-net +++ a/net/sched/sch_tbf.c @@ -32,6 +32,7 @@ #include #include #include +#include #include #include #include @@ -127,8 +128,8 @@ struct tbf_sched_data long tokens; /* Current number of B tokens */ long ptokens; /* Current number of P tokens */ psched_time_t t_c; /* Time check-point */ - struct timer_list wd_timer; /* Watchdog timer */ struct Qdisc *qdisc; /* Inner qdisc, default - bfifo queue */ + struct qdisc_watchdog watchdog; /* Watchdog timer */ }; #define L2T(q,L) ((q)->R_tab->data[(L)>>(q)->R_tab->rate.cell_log]) @@ -185,14 +186,6 @@ static unsigned int tbf_drop(struct Qdis return len; } -static void tbf_watchdog(unsigned long arg) -{ - struct Qdisc *sch = (struct Qdisc*)arg; - - sch->flags &= ~TCQ_F_THROTTLED; - netif_schedule(sch->dev); -} - static struct sk_buff *tbf_dequeue(struct Qdisc* sch) { struct tbf_sched_data *q = qdisc_priv(sch); @@ -202,13 +195,12 @@ static struct sk_buff *tbf_dequeue(struc if (skb) { psched_time_t now; - long toks, delay; + long toks; long ptoks = 0; unsigned int len = skb->len; - PSCHED_GET_TIME(now); - - toks = PSCHED_TDIFF_SAFE(now, q->t_c, q->buffer); + now = psched_get_time(); + toks = psched_tdiff_bounded(now, q->t_c, q->buffer); if (q->P_tab) { ptoks = toks + q->ptokens; @@ -230,12 +222,8 @@ static struct sk_buff *tbf_dequeue(struc return skb; } - delay = PSCHED_US2JIFFIE(max_t(long, -toks, -ptoks)); - - if (delay == 0) - delay = 1; - - mod_timer(&q->wd_timer, jiffies+delay); + qdisc_watchdog_schedule(&q->watchdog, + now + max_t(long, -toks, -ptoks)); /* Maybe we have a shorter packet in the queue, which can be sent now. 
It sounds cool, @@ -254,7 +242,6 @@ static struct sk_buff *tbf_dequeue(struc sch->qstats.drops++; } - sch->flags |= TCQ_F_THROTTLED; sch->qstats.overlimits++; } return NULL; @@ -266,11 +253,10 @@ static void tbf_reset(struct Qdisc* sch) qdisc_reset(q->qdisc); sch->q.qlen = 0; - PSCHED_GET_TIME(q->t_c); + q->t_c = psched_get_time(); q->tokens = q->buffer; q->ptokens = q->mtu; - sch->flags &= ~TCQ_F_THROTTLED; - del_timer(&q->wd_timer); + qdisc_watchdog_cancel(&q->watchdog); } static struct Qdisc *tbf_create_dflt_qdisc(struct Qdisc *sch, u32 limit) @@ -377,11 +363,8 @@ static int tbf_init(struct Qdisc* sch, s if (opt == NULL) return -EINVAL; - PSCHED_GET_TIME(q->t_c); - init_timer(&q->wd_timer); - q->wd_timer.function = tbf_watchdog; - q->wd_timer.data = (unsigned long)sch; - + q->t_c = psched_get_time(); + qdisc_watchdog_init(&q->watchdog, sch); q->qdisc = &noop_qdisc; return tbf_change(sch, opt); @@ -391,7 +374,7 @@ static void tbf_destroy(struct Qdisc *sc { struct tbf_sched_data *q = qdisc_priv(sch); - del_timer(&q->wd_timer); + qdisc_watchdog_cancel(&q->watchdog); if (q->P_tab) qdisc_put_rtab(q->P_tab); @@ -404,7 +387,7 @@ static void tbf_destroy(struct Qdisc *sc static int tbf_dump(struct Qdisc *sch, struct sk_buff *skb) { struct tbf_sched_data *q = qdisc_priv(sch); - unsigned char *b = skb->tail; + unsigned char *b = skb_tail_pointer(skb); struct rtattr *rta; struct tc_tbf_qopt opt; @@ -420,12 +403,12 @@ static int tbf_dump(struct Qdisc *sch, s opt.mtu = q->mtu; opt.buffer = q->buffer; RTA_PUT(skb, TCA_TBF_PARMS, sizeof(opt), &opt); - rta->rta_len = skb->tail - b; + rta->rta_len = skb_tail_pointer(skb) - b; return skb->len; rtattr_failure: - skb_trim(skb, b - skb->data); + nlmsg_trim(skb, b); return -1; } diff -puN net/sched/sch_teql.c~git-net net/sched/sch_teql.c --- a/net/sched/sch_teql.c~git-net +++ a/net/sched/sch_teql.c @@ -323,7 +323,7 @@ restart: nores = 1; break; } - __skb_pull(skb, skb->nh.raw - skb->data); + __skb_pull(skb, skb_network_offset(skb)); } while ((q = NEXT_SLAVE(q)) != start); if (nores && skb_res == NULL) { diff -puN net/sctp/associola.c~git-net net/sctp/associola.c --- a/net/sctp/associola.c~git-net +++ a/net/sctp/associola.c @@ -143,7 +143,7 @@ static struct sctp_association *sctp_ass /* Initialize the maximum mumber of new data packets that can be sent * in a burst. */ - asoc->max_burst = sctp_max_burst; + asoc->max_burst = sp->max_burst; /* initialize association timers */ asoc->timeouts[SCTP_EVENT_TIMEOUT_NONE] = 0; @@ -714,8 +714,16 @@ void sctp_assoc_control_transport(struct /* Record the transition on the transport. */ switch (command) { case SCTP_TRANSPORT_UP: + /* If we are moving from UNCONFIRMED state due + * to heartbeat success, report the SCTP_ADDR_CONFIRMED + * state to the user, otherwise report SCTP_ADDR_AVAILABLE. + */ + if (SCTP_UNCONFIRMED == transport->state && + SCTP_HEARTBEAT_SUCCESS == error) + spc_state = SCTP_ADDR_CONFIRMED; + else + spc_state = SCTP_ADDR_AVAILABLE; transport->state = SCTP_ACTIVE; - spc_state = SCTP_ADDR_AVAILABLE; break; case SCTP_TRANSPORT_DOWN: @@ -725,7 +733,7 @@ void sctp_assoc_control_transport(struct default: return; - }; + } /* Generate and send a SCTP_PEER_ADDR_CHANGE notification to the * user. 
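The scheduler hunks above all follow one conversion pattern: netlink dumps stop doing skb->tail arithmetic by hand and use skb_tail_pointer()/nlmsg_trim(), per-qdisc watchdog timers and manual TCQ_F_THROTTLED handling collapse into the shared qdisc_watchdog driven by psched_get_time() and PSCHED_TICKS_PER_SEC, and the open-coded filter teardown loops become tcf_destroy_chain(). The sketch below only illustrates that resulting shape and is not code from the patches; the "foo" qdisc, its foo_* functions and fields are made-up names, the helpers are the ones visible in the diffs, and wiring into a Qdisc_ops is omitted.

#include <linux/rtnetlink.h>
#include <net/netlink.h>
#include <net/pkt_sched.h>
#include <net/pkt_cls.h>

struct foo_sched_data {
	u32			limit;
	struct tcf_proto	*filter_list;
	struct qdisc_watchdog	watchdog;
};

static int foo_init(struct Qdisc *sch, struct rtattr *opt)
{
	struct foo_sched_data *q = qdisc_priv(sch);

	qdisc_watchdog_init(&q->watchdog, sch);	/* replaces init_timer() plus a private callback */
	q->limit = sch->dev->tx_queue_len;
	return 0;
}

static struct sk_buff *foo_dequeue(struct Qdisc *sch)
{
	struct foo_sched_data *q = qdisc_priv(sch);
	psched_time_t now = psched_get_time();	/* was PSCHED_GET_TIME(now) */

	/* Nothing eligible: arm the shared watchdog, which sets TCQ_F_THROTTLED
	 * and reschedules the device once the time arrives.  The one-second
	 * delay here is purely illustrative. */
	sch->qstats.overlimits++;
	qdisc_watchdog_schedule(&q->watchdog, now + PSCHED_TICKS_PER_SEC);
	return NULL;
}

static int foo_dump(struct Qdisc *sch, struct sk_buff *skb)
{
	struct foo_sched_data *q = qdisc_priv(sch);
	unsigned char *b = skb_tail_pointer(skb);	/* was skb->tail */

	RTA_PUT(skb, TCA_OPTIONS, sizeof(q->limit), &q->limit);
	return skb->len;

rtattr_failure:
	nlmsg_trim(skb, b);	/* was skb_trim(skb, b - skb->data) */
	return -1;
}

static void foo_destroy(struct Qdisc *sch)
{
	struct foo_sched_data *q = qdisc_priv(sch);

	qdisc_watchdog_cancel(&q->watchdog);	/* was del_timer() + clearing TCQ_F_THROTTLED */
	tcf_destroy_chain(q->filter_list);	/* was a while ((tp = ...)) tcf_destroy(tp) loop */
}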
diff -puN net/sctp/debug.c~git-net net/sctp/debug.c --- a/net/sctp/debug.c~git-net +++ a/net/sctp/debug.c @@ -93,8 +93,9 @@ const char *sctp_cname(const sctp_subtyp return "FWD_TSN"; default: - return "unknown chunk"; - }; + break; + } + return "unknown chunk"; } diff -puN net/sctp/input.c~git-net net/sctp/input.c --- a/net/sctp/input.c~git-net +++ a/net/sctp/input.c @@ -79,14 +79,10 @@ static void sctp_add_backlog(struct sock /* Calculate the SCTP checksum of an SCTP packet. */ static inline int sctp_rcv_checksum(struct sk_buff *skb) { - struct sctphdr *sh; - __u32 cmp, val; struct sk_buff *list = skb_shinfo(skb)->frag_list; - - sh = (struct sctphdr *) skb->h.raw; - cmp = ntohl(sh->checksum); - - val = sctp_start_cksum((__u8 *)sh, skb_headlen(skb)); + struct sctphdr *sh = sctp_hdr(skb); + __u32 cmp = ntohl(sh->checksum); + __u32 val = sctp_start_cksum((__u8 *)sh, skb_headlen(skb)); for (; list; list = list->next) val = sctp_update_cksum((__u8 *)list->data, skb_headlen(list), @@ -138,14 +134,13 @@ int sctp_rcv(struct sk_buff *skb) if (skb_linearize(skb)) goto discard_it; - sh = (struct sctphdr *) skb->h.raw; + sh = sctp_hdr(skb); /* Pull up the IP and SCTP headers. */ - __skb_pull(skb, skb->h.raw - skb->data); + __skb_pull(skb, skb_transport_offset(skb)); if (skb->len < sizeof(struct sctphdr)) goto discard_it; - if ((skb->ip_summed != CHECKSUM_UNNECESSARY) && - (sctp_rcv_checksum(skb) < 0)) + if (!skb_csum_unnecessary(skb) && sctp_rcv_checksum(skb) < 0) goto discard_it; skb_pull(skb, sizeof(struct sctphdr)); @@ -154,7 +149,7 @@ int sctp_rcv(struct sk_buff *skb) if (skb->len < sizeof(struct sctp_chunkhdr)) goto discard_it; - family = ipver2af(skb->nh.iph->version); + family = ipver2af(ip_hdr(skb)->version); af = sctp_get_af_specific(family); if (unlikely(!af)) goto discard_it; @@ -510,30 +505,30 @@ void sctp_err_finish(struct sock *sk, st void sctp_v4_err(struct sk_buff *skb, __u32 info) { struct iphdr *iph = (struct iphdr *)skb->data; - struct sctphdr *sh = (struct sctphdr *)(skb->data + (iph->ihl <<2)); - int type = skb->h.icmph->type; - int code = skb->h.icmph->code; + const int ihlen = iph->ihl * 4; + const int type = icmp_hdr(skb)->type; + const int code = icmp_hdr(skb)->code; struct sock *sk; struct sctp_association *asoc = NULL; struct sctp_transport *transport; struct inet_sock *inet; - char *saveip, *savesctp; + sk_buff_data_t saveip, savesctp; int err; - if (skb->len < ((iph->ihl << 2) + 8)) { + if (skb->len < ihlen + 8) { ICMP_INC_STATS_BH(ICMP_MIB_INERRORS); return; } /* Fix up skb to look at the embedded net header. */ - saveip = skb->nh.raw; - savesctp = skb->h.raw; - skb->nh.iph = iph; - skb->h.raw = (char *)sh; - sk = sctp_err_lookup(AF_INET, skb, sh, &asoc, &transport); - /* Put back, the original pointers. */ - skb->nh.raw = saveip; - skb->h.raw = savesctp; + saveip = skb->network_header; + savesctp = skb->transport_header; + skb_reset_network_header(skb); + skb_set_transport_header(skb, ihlen); + sk = sctp_err_lookup(AF_INET, skb, sctp_hdr(skb), &asoc, &transport); + /* Put back, the original values. 
*/ + skb->network_header = saveip; + skb->transport_header = savesctp; if (!sk) { ICMP_INC_STATS_BH(ICMP_MIB_INERRORS); return; @@ -616,7 +611,7 @@ int sctp_rcv_ootb(struct sk_buff *skb) break; ch_end = ((__u8 *)ch) + WORD_ROUND(ntohs(ch->length)); - if (ch_end > skb->tail) + if (ch_end > skb_tail_pointer(skb)) break; /* RFC 8.4, 2) If the OOTB packet contains an ABORT chunk, the @@ -648,7 +643,7 @@ int sctp_rcv_ootb(struct sk_buff *skb) } ch = (sctp_chunkhdr_t *) ch_end; - } while (ch_end < skb->tail); + } while (ch_end < skb_tail_pointer(skb)); return 0; @@ -905,7 +900,7 @@ static struct sctp_association *__sctp_r struct sctp_association *asoc; union sctp_addr addr; union sctp_addr *paddr = &addr; - struct sctphdr *sh = (struct sctphdr *) skb->h.raw; + struct sctphdr *sh = sctp_hdr(skb); sctp_chunkhdr_t *ch; union sctp_params params; sctp_init_chunk_t *init; diff -puN net/sctp/inqueue.c~git-net net/sctp/inqueue.c --- a/net/sctp/inqueue.c~git-net +++ a/net/sctp/inqueue.c @@ -159,16 +159,16 @@ struct sctp_chunk *sctp_inq_pop(struct s * the skb->tail. */ if (unlikely(skb_is_nonlinear(chunk->skb))) { - if (chunk->chunk_end > chunk->skb->tail) - chunk->chunk_end = chunk->skb->tail; + if (chunk->chunk_end > skb_tail_pointer(chunk->skb)) + chunk->chunk_end = skb_tail_pointer(chunk->skb); } skb_pull(chunk->skb, sizeof(sctp_chunkhdr_t)); chunk->subh.v = NULL; /* Subheader is no longer valid. */ - if (chunk->chunk_end < chunk->skb->tail) { + if (chunk->chunk_end < skb_tail_pointer(chunk->skb)) { /* This is not a singleton */ chunk->singleton = 0; - } else if (chunk->chunk_end > chunk->skb->tail) { + } else if (chunk->chunk_end > skb_tail_pointer(chunk->skb)) { /* RFC 2960, Section 6.10 Bundling * * Partial chunks MUST NOT be placed in an SCTP packet. diff -puN net/sctp/ipv6.c~git-net net/sctp/ipv6.c --- a/net/sctp/ipv6.c~git-net +++ a/net/sctp/ipv6.c @@ -122,26 +122,24 @@ SCTP_STATIC void sctp_v6_err(struct sk_b int type, int code, int offset, __be32 info) { struct inet6_dev *idev; - struct ipv6hdr *iph = (struct ipv6hdr *)skb->data; - struct sctphdr *sh = (struct sctphdr *)(skb->data + offset); struct sock *sk; struct sctp_association *asoc; struct sctp_transport *transport; struct ipv6_pinfo *np; - char *saveip, *savesctp; + sk_buff_data_t saveip, savesctp; int err; idev = in6_dev_get(skb->dev); /* Fix up skb to look at the embedded net header. */ - saveip = skb->nh.raw; - savesctp = skb->h.raw; - skb->nh.ipv6h = iph; - skb->h.raw = (char *)sh; - sk = sctp_err_lookup(AF_INET6, skb, sh, &asoc, &transport); + saveip = skb->network_header; + savesctp = skb->transport_header; + skb_reset_network_header(skb); + skb_set_transport_header(skb, offset); + sk = sctp_err_lookup(AF_INET6, skb, sctp_hdr(skb), &asoc, &transport); /* Put back, the original pointers. 
*/ - skb->nh.raw = saveip; - skb->h.raw = savesctp; + skb->network_header = saveip; + skb->transport_header = savesctp; if (!sk) { ICMP6_INC_STATS_BH(idev, ICMP6_MIB_INERRORS); goto out; @@ -391,13 +389,13 @@ static void sctp_v6_from_skb(union sctp_ addr->v6.sin6_flowinfo = 0; /* FIXME */ addr->v6.sin6_scope_id = ((struct inet6_skb_parm *)skb->cb)->iif; - sh = (struct sctphdr *) skb->h.raw; + sh = sctp_hdr(skb); if (is_saddr) { *port = sh->source; - from = &skb->nh.ipv6h->saddr; + from = &ipv6_hdr(skb)->saddr; } else { *port = sh->dest; - from = &skb->nh.ipv6h->daddr; + from = &ipv6_hdr(skb)->daddr; } ipv6_addr_copy(&addr->v6.sin6_addr, from); } @@ -606,7 +604,7 @@ static sctp_scope_t sctp_v6_scope(union default: retval = SCTP_SCOPE_GLOBAL; break; - }; + } return retval; } @@ -699,7 +697,7 @@ static int sctp_v6_skb_iif(const struct /* Was this packet marked by Explicit Congestion Notification? */ static int sctp_v6_is_ce(const struct sk_buff *skb) { - return *((__u32 *)(skb->nh.ipv6h)) & htonl(1<<20); + return *((__u32 *)(ipv6_hdr(skb))) & htonl(1 << 20); } /* Dump the v6 addr to the seq file. */ @@ -766,19 +764,19 @@ static void sctp_inet6_skb_msgname(struc if (msgname) { sctp_inet6_msgname(msgname, addr_len); sin6 = (struct sockaddr_in6 *)msgname; - sh = (struct sctphdr *)skb->h.raw; + sh = sctp_hdr(skb); sin6->sin6_port = sh->source; /* Map ipv4 address into v4-mapped-on-v6 address. */ if (sctp_sk(skb->sk)->v4mapped && - skb->nh.iph->version == 4) { + ip_hdr(skb)->version == 4) { sctp_v4_map_v6((union sctp_addr *)sin6); - sin6->sin6_addr.s6_addr32[3] = skb->nh.iph->saddr; + sin6->sin6_addr.s6_addr32[3] = ip_hdr(skb)->saddr; return; } /* Otherwise, just copy the v6 address. */ - ipv6_addr_copy(&sin6->sin6_addr, &skb->nh.ipv6h->saddr); + ipv6_addr_copy(&sin6->sin6_addr, &ipv6_hdr(skb)->saddr); if (ipv6_addr_type(&sin6->sin6_addr) & IPV6_ADDR_LINKLOCAL) { struct sctp_ulpevent *ev = sctp_skb2event(skb); sin6->sin6_scope_id = ev->iif; diff -puN net/sctp/output.c~git-net net/sctp/output.c --- a/net/sctp/output.c~git-net +++ a/net/sctp/output.c @@ -176,7 +176,7 @@ sctp_xmit_t sctp_packet_transmit_chunk(s case SCTP_XMIT_OK: case SCTP_XMIT_NAGLE_DELAY: break; - }; + } return retval; } diff -puN net/sctp/outqueue.c~git-net net/sctp/outqueue.c --- a/net/sctp/outqueue.c~git-net +++ a/net/sctp/outqueue.c @@ -338,7 +338,7 @@ int sctp_outq_tail(struct sctp_outq *q, SCTP_INC_STATS(SCTP_MIB_OUTORDERCHUNKS); q->empty = 0; break; - }; + } } else { list_add_tail(&chunk->list, &q->control_chunk_list); SCTP_INC_STATS(SCTP_MIB_OUTCTRLCHUNKS); @@ -630,7 +630,7 @@ static int sctp_outq_flush_rtx(struct sc /* Retrieve a new chunk to bundle. */ lchunk = sctp_list_dequeue(lqueue); break; - }; + } /* If we are here due to a retransmit timeout or a fast * retransmit and if there are any chunks left in the retransmit @@ -779,7 +779,7 @@ int sctp_outq_flush(struct sctp_outq *q, default: /* We built a chunk with an illegal type! */ BUG(); - }; + } } /* Is it OK to send data chunks? 
*/ @@ -1397,7 +1397,7 @@ static void sctp_check_transmitted(struc SCTP_DEBUG_PRINTK("ACKed: %08x", tsn); dbg_prt_state = 0; dbg_ack_tsn = tsn; - }; + } dbg_last_ack_tsn = tsn; #endif /* SCTP_DEBUG */ @@ -1452,7 +1452,7 @@ static void sctp_check_transmitted(struc SCTP_DEBUG_PRINTK("KEPT: %08x",tsn); dbg_prt_state = 1; dbg_kept_tsn = tsn; - }; + } dbg_last_kept_tsn = tsn; #endif /* SCTP_DEBUG */ @@ -1476,7 +1476,7 @@ static void sctp_check_transmitted(struc } else { SCTP_DEBUG_PRINTK("\n"); } - }; + } #endif /* SCTP_DEBUG */ if (transport) { if (bytes_acked) { diff -puN net/sctp/protocol.c~git-net net/sctp/protocol.c --- a/net/sctp/protocol.c~git-net +++ a/net/sctp/protocol.c @@ -235,13 +235,13 @@ static void sctp_v4_from_skb(union sctp_ port = &addr->v4.sin_port; addr->v4.sin_family = AF_INET; - sh = (struct sctphdr *) skb->h.raw; + sh = sctp_hdr(skb); if (is_saddr) { *port = sh->source; - from = &skb->nh.iph->saddr; + from = &ip_hdr(skb)->saddr; } else { *port = sh->dest; - from = &skb->nh.iph->daddr; + from = &ip_hdr(skb)->daddr; } memcpy(&addr->v4.sin_addr.s_addr, from, sizeof(struct in_addr)); } @@ -530,7 +530,7 @@ static int sctp_v4_skb_iif(const struct /* Was this packet marked by Explicit Congestion Notification? */ static int sctp_v4_is_ce(const struct sk_buff *skb) { - return INET_ECN_is_ce(skb->nh.iph->tos); + return INET_ECN_is_ce(ip_hdr(skb)->tos); } /* Create and initialize a new sk for the socket returned by accept(). */ @@ -731,15 +731,13 @@ static void sctp_inet_event_msgname(stru /* Initialize and copy out a msgname from an inbound skb. */ static void sctp_inet_skb_msgname(struct sk_buff *skb, char *msgname, int *len) { - struct sctphdr *sh; - struct sockaddr_in *sin; - if (msgname) { + struct sctphdr *sh = sctp_hdr(skb); + struct sockaddr_in *sin = (struct sockaddr_in *)msgname; + sctp_inet_msgname(msgname, len); - sin = (struct sockaddr_in *)msgname; - sh = (struct sctphdr *)skb->h.raw; sin->sin_port = sh->source; - sin->sin_addr.s_addr = skb->nh.iph->saddr; + sin->sin_addr.s_addr = ip_hdr(skb)->saddr; } } @@ -1044,7 +1042,7 @@ SCTP_STATIC __init int sctp_init(void) sctp_cookie_preserve_enable = 1; /* Max.Burst - 4 */ - sctp_max_burst = SCTP_MAX_BURST; + sctp_max_burst = SCTP_DEFAULT_MAX_BURST; /* Association.Max.Retrans - 10 attempts * Path.Max.Retrans - 5 attempts (per destination address) diff -puN net/sctp/sm_make_chunk.c~git-net net/sctp/sm_make_chunk.c --- a/net/sctp/sm_make_chunk.c~git-net +++ a/net/sctp/sm_make_chunk.c @@ -86,7 +86,7 @@ int sctp_chunk_iif(const struct sctp_chu struct sctp_af *af; int iif = 0; - af = sctp_get_af_specific(ipver2af(chunk->skb->nh.iph->version)); + af = sctp_get_af_specific(ipver2af(ip_hdr(chunk->skb)->version)); if (af) iif = af->skb_iif(chunk->skb); @@ -1143,7 +1143,7 @@ void *sctp_addto_chunk(struct sctp_chunk /* Adjust the chunk length field. */ chunk->chunk_hdr->length = htons(chunklen + padlen + len); - chunk->chunk_end = chunk->skb->tail; + chunk->chunk_end = skb_tail_pointer(chunk->skb); return target; } @@ -1168,7 +1168,7 @@ int sctp_user_addto_chunk(struct sctp_ch /* Adjust the chunk length field. */ chunk->chunk_hdr->length = htons(ntohs(chunk->chunk_hdr->length) + len); - chunk->chunk_end = chunk->skb->tail; + chunk->chunk_end = skb_tail_pointer(chunk->skb); out: return err; @@ -1233,7 +1233,7 @@ struct sctp_association *sctp_make_temp_ asoc->temp = 1; skb = chunk->skb; /* Create an entry for the source address of the packet. 
*/ - af = sctp_get_af_specific(ipver2af(skb->nh.iph->version)); + af = sctp_get_af_specific(ipver2af(ip_hdr(skb)->version)); if (unlikely(!af)) goto fail; af->from_skb(&asoc->c.peer_addr, skb, 1); @@ -2077,7 +2077,7 @@ static int sctp_process_param(struct sct default: /* Just ignore anything else. */ break; - }; + } } break; @@ -2118,7 +2118,7 @@ static int sctp_process_param(struct sct SCTP_DEBUG_PRINTK("Ignoring param: %d for association %p.\n", ntohs(param.p->type), asoc); break; - }; + } return retval; } diff -puN net/sctp/sm_sideeffect.c~git-net net/sctp/sm_sideeffect.c --- a/net/sctp/sm_sideeffect.c~git-net +++ a/net/sctp/sm_sideeffect.c @@ -464,7 +464,7 @@ static void sctp_cmd_init_failed(sctp_cm struct sctp_ulpevent *event; event = sctp_ulpevent_make_assoc_change(asoc,0, SCTP_CANT_STR_ASSOC, - (__u16)error, 0, 0, + (__u16)error, 0, 0, NULL, GFP_ATOMIC); if (event) @@ -492,8 +492,13 @@ static void sctp_cmd_assoc_failed(sctp_c /* Cancel any partial delivery in progress. */ sctp_ulpq_abort_pd(&asoc->ulpq, GFP_ATOMIC); - event = sctp_ulpevent_make_assoc_change(asoc, 0, SCTP_COMM_LOST, - (__u16)error, 0, 0, + if (event_type == SCTP_EVENT_T_CHUNK && subtype.chunk == SCTP_CID_ABORT) + event = sctp_ulpevent_make_assoc_change(asoc, 0, SCTP_COMM_LOST, + (__u16)error, 0, 0, chunk, + GFP_ATOMIC); + else + event = sctp_ulpevent_make_assoc_change(asoc, 0, SCTP_COMM_LOST, + (__u16)error, 0, 0, NULL, GFP_ATOMIC); if (event) sctp_add_cmd_sf(commands, SCTP_CMD_EVENT_ULP, @@ -1004,7 +1009,7 @@ static int sctp_side_effects(sctp_event_ status, state, event_type, subtype.chunk); BUG(); break; - }; + } bail: return error; @@ -1484,7 +1489,8 @@ static int sctp_cmd_interpreter(sctp_eve printk(KERN_WARNING "Impossible command: %u, %p\n", cmd->verb, cmd->obj.ptr); break; - }; + } + if (error) break; } diff -puN net/sctp/sm_statefuns.c~git-net net/sctp/sm_statefuns.c --- a/net/sctp/sm_statefuns.c~git-net +++ a/net/sctp/sm_statefuns.c @@ -186,7 +186,7 @@ sctp_disposition_t sctp_sf_do_4_C(const * notification is passed to the upper layer. */ ev = sctp_ulpevent_make_assoc_change(asoc, 0, SCTP_SHUTDOWN_COMP, - 0, 0, 0, GFP_ATOMIC); + 0, 0, 0, NULL, GFP_ATOMIC); if (ev) sctp_add_cmd_sf(commands, SCTP_CMD_EVENT_ULP, SCTP_ULPEVENT(ev)); @@ -629,7 +629,7 @@ sctp_disposition_t sctp_sf_do_5_1D_ce(co case -SCTP_IERROR_BAD_SIG: default: return sctp_sf_pdiscard(ep, asoc, type, arg, commands); - }; + } } @@ -661,7 +661,7 @@ sctp_disposition_t sctp_sf_do_5_1D_ce(co ev = sctp_ulpevent_make_assoc_change(new_asoc, 0, SCTP_COMM_UP, 0, new_asoc->c.sinit_num_ostreams, new_asoc->c.sinit_max_instreams, - GFP_ATOMIC); + NULL, GFP_ATOMIC); if (!ev) goto nomem_ev; @@ -790,7 +790,7 @@ sctp_disposition_t sctp_sf_do_5_1E_ca(co ev = sctp_ulpevent_make_assoc_change(asoc, 0, SCTP_COMM_UP, 0, asoc->c.sinit_num_ostreams, asoc->c.sinit_max_instreams, - GFP_ATOMIC); + NULL, GFP_ATOMIC); if (!ev) goto nomem; @@ -1195,7 +1195,7 @@ static void sctp_tietags_populate(struct new_asoc->c.my_ttag = asoc->c.my_vtag; new_asoc->c.peer_ttag = asoc->c.peer_vtag; break; - }; + } /* Other parameters for the endpoint SHOULD be copied from the * existing parameters of the association (e.g. 
number of @@ -1625,7 +1625,7 @@ static sctp_disposition_t sctp_sf_do_dup ev = sctp_ulpevent_make_assoc_change(asoc, 0, SCTP_RESTART, 0, new_asoc->c.sinit_num_ostreams, new_asoc->c.sinit_max_instreams, - GFP_ATOMIC); + NULL, GFP_ATOMIC); if (!ev) goto nomem_ev; @@ -1691,7 +1691,7 @@ static sctp_disposition_t sctp_sf_do_dup ev = sctp_ulpevent_make_assoc_change(asoc, 0, SCTP_COMM_UP, 0, new_asoc->c.sinit_num_ostreams, new_asoc->c.sinit_max_instreams, - GFP_ATOMIC); + NULL, GFP_ATOMIC); if (!ev) goto nomem_ev; @@ -1786,7 +1786,7 @@ static sctp_disposition_t sctp_sf_do_dup SCTP_COMM_UP, 0, asoc->c.sinit_num_ostreams, asoc->c.sinit_max_instreams, - GFP_ATOMIC); + NULL, GFP_ATOMIC); if (!ev) goto nomem; @@ -1904,7 +1904,7 @@ sctp_disposition_t sctp_sf_do_5_2_4_dupc case -SCTP_IERROR_BAD_SIG: default: return sctp_sf_pdiscard(ep, asoc, type, arg, commands); - }; + } } /* Compare the tie_tag in cookie with the verification tag of @@ -1936,7 +1936,7 @@ sctp_disposition_t sctp_sf_do_5_2_4_dupc default: /* Discard packet for all others. */ retval = sctp_sf_pdiscard(ep, asoc, type, arg, commands); break; - }; + } /* Delete the tempory new association. */ sctp_add_cmd_sf(commands, SCTP_CMD_NEW_ASOC, SCTP_ASOC(new_asoc)); @@ -3035,7 +3035,7 @@ sctp_disposition_t sctp_sf_do_9_2_final( * notification is passed to the upper layer. */ ev = sctp_ulpevent_make_assoc_change(asoc, 0, SCTP_SHUTDOWN_COMP, - 0, 0, 0, GFP_ATOMIC); + 0, 0, 0, NULL, GFP_ATOMIC); if (!ev) goto nomem; @@ -3115,7 +3115,7 @@ sctp_disposition_t sctp_sf_ootb(const st break; ch_end = ((__u8 *)ch) + WORD_ROUND(ntohs(ch->length)); - if (ch_end > skb->tail) + if (ch_end > skb_tail_pointer(skb)) break; if (SCTP_CID_SHUTDOWN_ACK == ch->type) @@ -3130,7 +3130,7 @@ sctp_disposition_t sctp_sf_ootb(const st return sctp_sf_pdiscard(ep, asoc, type, arg, commands); ch = (sctp_chunkhdr_t *) ch_end; - } while (ch_end < skb->tail); + } while (ch_end < skb_tail_pointer(skb)); if (ootb_shut_ack) sctp_sf_shut_8_4_5(ep, asoc, type, arg, commands); @@ -4816,7 +4816,7 @@ sctp_disposition_t sctp_sf_t2_timer_expi default: BUG(); break; - }; + } if (!reply) goto nomem; @@ -5286,7 +5286,7 @@ static int sctp_eat_data(const struct sc chunk->ecn_ce_done = 1; af = sctp_get_af_specific( - ipver2af(chunk->skb->nh.iph->version)); + ipver2af(ip_hdr(chunk->skb)->version)); if (af && af->is_ce(chunk->skb) && asoc->peer.ecn_capable) { /* Do real work as sideffect. */ diff -puN net/sctp/sm_statetable.c~git-net net/sctp/sm_statetable.c --- a/net/sctp/sm_statetable.c~git-net +++ a/net/sctp/sm_statetable.c @@ -101,7 +101,7 @@ const sctp_sm_table_entry_t *sctp_sm_loo default: /* Yikes! We got an illegal event type. */ return &bug; - }; + } } #define TYPE_SCTP_FUNC(func) {.fn = func, .name = #func} diff -puN net/sctp/socket.c~git-net net/sctp/socket.c --- a/net/sctp/socket.c~git-net +++ a/net/sctp/socket.c @@ -941,7 +941,7 @@ SCTP_STATIC int sctp_setsockopt_bindx(st default: err = -EINVAL; break; - }; + } out: kfree(kaddrs); @@ -2039,6 +2039,10 @@ static int sctp_setsockopt_autoclose(str * SPP_HB_DEMAND - Request a user initiated heartbeat * to be made immediately. * + * SPP_HB_TIME_IS_ZERO - Specify's that the time for + * heartbeat delayis to be set to the value of 0 + * milliseconds. + * * SPP_PMTUD_ENABLE - This field will enable PMTU * discovery upon the specified address. 
Note that * if the address feild is empty then all addresses @@ -2081,13 +2085,30 @@ static int sctp_apply_peer_addr_params(s return error; } - if (params->spp_hbinterval) { - if (trans) { - trans->hbinterval = msecs_to_jiffies(params->spp_hbinterval); - } else if (asoc) { - asoc->hbinterval = msecs_to_jiffies(params->spp_hbinterval); - } else { - sp->hbinterval = params->spp_hbinterval; + /* Note that unless the spp_flag is set to SPP_HB_ENABLE the value of + * this field is ignored. Note also that a value of zero indicates + * the current setting should be left unchanged. + */ + if (params->spp_flags & SPP_HB_ENABLE) { + + /* Re-zero the interval if the SPP_HB_TIME_IS_ZERO is + * set. This lets us use 0 value when this flag + * is set. + */ + if (params->spp_flags & SPP_HB_TIME_IS_ZERO) + params->spp_hbinterval = 0; + + if (params->spp_hbinterval || + (params->spp_flags & SPP_HB_TIME_IS_ZERO)) { + if (trans) { + trans->hbinterval = + msecs_to_jiffies(params->spp_hbinterval); + } else if (asoc) { + asoc->hbinterval = + msecs_to_jiffies(params->spp_hbinterval); + } else { + sp->hbinterval = params->spp_hbinterval; + } } } @@ -2104,7 +2125,12 @@ static int sctp_apply_peer_addr_params(s } } - if (params->spp_pathmtu) { + /* When Path MTU discovery is disabled the value specified here will + * be the "fixed" path mtu (i.e. the value of the spp_flags field must + * include the flag SPP_PMTUD_DISABLE for this field to have any + * effect). + */ + if ((params->spp_flags & SPP_PMTUD_DISABLE) && params->spp_pathmtu) { if (trans) { trans->pathmtu = params->spp_pathmtu; sctp_assoc_sync_pmtu(asoc); @@ -2135,7 +2161,11 @@ static int sctp_apply_peer_addr_params(s } } - if (params->spp_sackdelay) { + /* Note that unless the spp_flag is set to SPP_SACKDELAY_ENABLE the + * value of this field is ignored. Note also that a value of zero + * indicates the current setting should be left unchanged. + */ + if ((params->spp_flags & SPP_SACKDELAY_ENABLE) && params->spp_sackdelay) { if (trans) { trans->sackdelay = msecs_to_jiffies(params->spp_sackdelay); @@ -2163,7 +2193,11 @@ static int sctp_apply_peer_addr_params(s } } - if (params->spp_pathmaxrxt) { + /* Note that unless the spp_flag is set to SPP_PMTUD_ENABLE the value + * of this field is ignored. Note also that a value of zero + * indicates the current setting should be left unchanged. + */ + if ((params->spp_flags & SPP_PMTUD_ENABLE) && params->spp_pathmaxrxt) { if (trans) { trans->pathmaxrxt = params->spp_pathmaxrxt; } else if (asoc) { @@ -2255,7 +2289,7 @@ static int sctp_setsockopt_peer_addr_par return 0; } -/* 7.1.24. Delayed Ack Timer (SCTP_DELAYED_ACK_TIME) +/* 7.1.23. Delayed Ack Timer (SCTP_DELAYED_ACK_TIME) * * This options will get or set the delayed ack timer. The time is set * in milliseconds. If the assoc_id is 0, then this sets or gets the @@ -2792,6 +2826,102 @@ static int sctp_setsockopt_context(struc return 0; } +/* + * 7.1.24. Get or set fragmented interleave (SCTP_FRAGMENT_INTERLEAVE) + * + * This options will at a minimum specify if the implementation is doing + * fragmented interleave. Fragmented interleave, for a one to many + * socket, is when subsequent calls to receive a message may return + * parts of messages from different associations. Some implementations + * may allow you to turn this value on or off. If so, when turned off, + * no fragment interleave will occur (which will cause a head of line + * blocking amongst multiple associations sharing the same one to many + * socket). 
When this option is turned on, then each receive call may + * come from a different association (thus the user must receive data + * with the extended calls (e.g. sctp_recvmsg) to keep track of which + * association each receive belongs to. + * + * This option takes a boolean value. A non-zero value indicates that + * fragmented interleave is on. A value of zero indicates that + * fragmented interleave is off. + * + * Note that it is important that an implementation that allows this + * option to be turned on, have it off by default. Otherwise an unaware + * application using the one to many model may become confused and act + * incorrectly. + */ +static int sctp_setsockopt_fragment_interleave(struct sock *sk, + char __user *optval, + int optlen) +{ + int val; + + if (optlen != sizeof(int)) + return -EINVAL; + if (get_user(val, (int __user *)optval)) + return -EFAULT; + + sctp_sk(sk)->frag_interleave = (val == 0) ? 0 : 1; + + return 0; +} + +/* + * 7.1.25. Set or Get the sctp partial delivery point + * (SCTP_PARTIAL_DELIVERY_POINT) + * This option will set or get the SCTP partial delivery point. This + * point is the size of a message where the partial delivery API will be + * invoked to help free up rwnd space for the peer. Setting this to a + * lower value will cause partial delivery's to happen more often. The + * calls argument is an integer that sets or gets the partial delivery + * point. + */ +static int sctp_setsockopt_partial_delivery_point(struct sock *sk, + char __user *optval, + int optlen) +{ + u32 val; + + if (optlen != sizeof(u32)) + return -EINVAL; + if (get_user(val, (int __user *)optval)) + return -EFAULT; + + sctp_sk(sk)->pd_point = val; + + return 0; /* is this the right error code? */ +} + +/* + * 7.1.28. Set or Get the maximum burst (SCTP_MAX_BURST) + * + * This option will allow a user to change the maximum burst of packets + * that can be emitted by this association. Note that the default value + * is 4, and some implementations may restrict this setting so that it + * can only be lowered. + * + * NOTE: This text doesn't seem right. Do this on a socket basis with + * future associations inheriting the socket value. + */ +static int sctp_setsockopt_maxburst(struct sock *sk, + char __user *optval, + int optlen) +{ + int val; + + if (optlen != sizeof(int)) + return -EINVAL; + if (get_user(val, (int __user *)optval)) + return -EFAULT; + + if (val < 0) + return -EINVAL; + + sctp_sk(sk)->max_burst = val; + + return 0; +} + /* API 6.2 setsockopt(), getsockopt() * * Applications use setsockopt() and getsockopt() to set or retrieve @@ -2871,6 +3001,9 @@ SCTP_STATIC int sctp_setsockopt(struct s case SCTP_DELAYED_ACK_TIME: retval = sctp_setsockopt_delayed_ack_time(sk, optval, optlen); break; + case SCTP_PARTIAL_DELIVERY_POINT: + retval = sctp_setsockopt_partial_delivery_point(sk, optval, optlen); + break; case SCTP_INITMSG: retval = sctp_setsockopt_initmsg(sk, optval, optlen); @@ -2906,11 +3039,16 @@ SCTP_STATIC int sctp_setsockopt(struct s case SCTP_CONTEXT: retval = sctp_setsockopt_context(sk, optval, optlen); break; - + case SCTP_FRAGMENT_INTERLEAVE: + retval = sctp_setsockopt_fragment_interleave(sk, optval, optlen); + break; + case SCTP_MAX_BURST: + retval = sctp_setsockopt_maxburst(sk, optval, optlen); + break; default: retval = -ENOPROTOOPT; break; - }; + } sctp_release_sock(sk); @@ -3066,6 +3204,7 @@ SCTP_STATIC int sctp_init_sock(struct so sp->default_timetolive = 0; sp->default_rcv_context = 0; + sp->max_burst = sctp_max_burst; /* Initialize default setup parameters. 
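/*
 * [Editorial aside, not part of the patch] Userspace sketch for the three
 * new socket options added above.  The kernel expects exactly an int (a
 * u32 for the delivery point), and any non-zero value turns fragment
 * interleave on.  Note that in this version the getsockopt() counterparts
 * for SCTP_PARTIAL_DELIVERY_POINT and SCTP_MAX_BURST, quoted further
 * below, copy the value out but still return -ENOTSUPP.  The option names
 * are assumed to be exported via <netinet/sctp.h>.
 */
#include <stdint.h>
#include <sys/socket.h>
#include <netinet/in.h>
#include <netinet/sctp.h>

static void tune_one_to_many_socket(int fd)
{
	int interleave = 1;		/* allow interleaving across associations */
	uint32_t pd_point = 8192;	/* invoke partial delivery at 8 KiB */
	int max_burst = 4;		/* the default, per the comment above */

	setsockopt(fd, IPPROTO_SCTP, SCTP_FRAGMENT_INTERLEAVE,
		   &interleave, sizeof(interleave));
	setsockopt(fd, IPPROTO_SCTP, SCTP_PARTIAL_DELIVERY_POINT,
		   &pd_point, sizeof(pd_point));
	setsockopt(fd, IPPROTO_SCTP, SCTP_MAX_BURST,
		   &max_burst, sizeof(max_burst));
}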
These parameters * can be modified with the SCTP_INITMSG socket option or @@ -3134,8 +3273,9 @@ SCTP_STATIC int sctp_init_sock(struct so sp->pf = sctp_get_pf_specific(sk->sk_family); /* Control variables for partial data delivery. */ - sp->pd_mode = 0; + atomic_set(&sp->pd_mode, 0); skb_queue_head_init(&sp->pd_lobby); + sp->frag_interleave = 0; /* Create a per socket endpoint structure. Even if we * change the data structure relationships, this may still @@ -3642,7 +3782,7 @@ static int sctp_getsockopt_peer_addr_par return 0; } -/* 7.1.24. Delayed Ack Timer (SCTP_DELAYED_ACK_TIME) +/* 7.1.23. Delayed Ack Timer (SCTP_DELAYED_ACK_TIME) * * This options will get or set the delayed ack timer. The time is set * in milliseconds. If the assoc_id is 0, then this sets or gets the @@ -4560,6 +4700,77 @@ static int sctp_getsockopt_maxseg(struct return 0; } +/* + * 7.1.24. Get or set fragmented interleave (SCTP_FRAGMENT_INTERLEAVE) + * (chapter and verse is quoted at sctp_setsockopt_fragment_interleave()) + */ +static int sctp_getsockopt_fragment_interleave(struct sock *sk, int len, + char __user *optval, int __user *optlen) +{ + int val; + + if (len < sizeof(int)) + return -EINVAL; + + len = sizeof(int); + + val = sctp_sk(sk)->frag_interleave; + if (put_user(len, optlen)) + return -EFAULT; + if (copy_to_user(optval, &val, len)) + return -EFAULT; + + return 0; +} + +/* + * 7.1.25. Set or Get the sctp partial delivery point + * (chapter and verse is quoted at sctp_setsockopt_partial_delivery_point()) + */ +static int sctp_getsockopt_partial_delivery_point(struct sock *sk, int len, + char __user *optval, + int __user *optlen) +{ + u32 val; + + if (len < sizeof(u32)) + return -EINVAL; + + len = sizeof(u32); + + val = sctp_sk(sk)->pd_point; + if (put_user(len, optlen)) + return -EFAULT; + if (copy_to_user(optval, &val, len)) + return -EFAULT; + + return -ENOTSUPP; +} + +/* + * 7.1.28. Set or Get the maximum burst (SCTP_MAX_BURST) + * (chapter and verse is quoted at sctp_setsockopt_maxburst()) + */ +static int sctp_getsockopt_maxburst(struct sock *sk, int len, + char __user *optval, + int __user *optlen) +{ + int val; + + if (len < sizeof(int)) + return -EINVAL; + + len = sizeof(int); + + val = sctp_sk(sk)->max_burst; + if (put_user(len, optlen)) + return -EFAULT; + if (copy_to_user(optval, &val, len)) + return -EFAULT; + + return -ENOTSUPP; +} + SCTP_STATIC int sctp_getsockopt(struct sock *sk, int level, int optname, char __user *optval, int __user *optlen) { @@ -4672,10 +4883,21 @@ SCTP_STATIC int sctp_getsockopt(struct s case SCTP_CONTEXT: retval = sctp_getsockopt_context(sk, len, optval, optlen); break; + case SCTP_FRAGMENT_INTERLEAVE: + retval = sctp_getsockopt_fragment_interleave(sk, len, optval, + optlen); + break; + case SCTP_PARTIAL_DELIVERY_POINT: + retval = sctp_getsockopt_partial_delivery_point(sk, len, optval, + optlen); + break; + case SCTP_MAX_BURST: + retval = sctp_getsockopt_maxburst(sk, len, optval, optlen); + break; default: retval = -ENOPROTOOPT; break; - }; + } sctp_release_sock(sk); return retval; @@ -5000,7 +5222,8 @@ int sctp_inet_listen(struct socket *sock break; default: break; - }; + } + if (err) goto cleanup; @@ -5263,7 +5486,7 @@ SCTP_STATIC int sctp_msghdr_parse(const default: return -EINVAL; - }; + } } return 0; } @@ -5766,9 +5989,9 @@ static void sctp_sock_migrate(struct soc * 3) Peeling off non-partial delivery; move pd_lobby to receive_queue. 
*/ skb_queue_head_init(&newsp->pd_lobby); - sctp_sk(newsk)->pd_mode = assoc->ulpq.pd_mode; + atomic_set(&sctp_sk(newsk)->pd_mode, assoc->ulpq.pd_mode); - if (sctp_sk(oldsk)->pd_mode) { + if (atomic_read(&sctp_sk(oldsk)->pd_mode)) { struct sk_buff_head *queue; /* Decide which queue to move pd_lobby skbs to. */ @@ -5794,7 +6017,7 @@ static void sctp_sock_migrate(struct soc * delivery to finish. */ if (assoc->ulpq.pd_mode) - sctp_clear_pd(oldsk); + sctp_clear_pd(oldsk, NULL); } diff -puN net/sctp/transport.c~git-net net/sctp/transport.c --- a/net/sctp/transport.c~git-net +++ a/net/sctp/transport.c @@ -507,7 +507,7 @@ void sctp_transport_lower_cwnd(struct sc transport->cwnd = max(transport->cwnd/2, 4*transport->asoc->pathmtu); break; - }; + } transport->partial_bytes_acked = 0; SCTP_DEBUG_PRINTK("%s: transport: %p reason: %d cwnd: " diff -puN net/sctp/ulpevent.c~git-net net/sctp/ulpevent.c --- a/net/sctp/ulpevent.c~git-net +++ a/net/sctp/ulpevent.c @@ -131,19 +131,54 @@ static inline void sctp_ulpevent_release struct sctp_ulpevent *sctp_ulpevent_make_assoc_change( const struct sctp_association *asoc, __u16 flags, __u16 state, __u16 error, __u16 outbound, - __u16 inbound, gfp_t gfp) + __u16 inbound, struct sctp_chunk *chunk, gfp_t gfp) { struct sctp_ulpevent *event; struct sctp_assoc_change *sac; struct sk_buff *skb; - event = sctp_ulpevent_new(sizeof(struct sctp_assoc_change), + /* If the lower layer passed in the chunk, it will be + * an ABORT, so we need to include it in the sac_info. + */ + if (chunk) { + /* sctp_inqu_pop() has allready pulled off the chunk + * header. We need to put it back temporarily + */ + skb_push(chunk->skb, sizeof(sctp_chunkhdr_t)); + + /* Copy the chunk data to a new skb and reserve enough + * head room to use as notification. + */ + skb = skb_copy_expand(chunk->skb, + sizeof(struct sctp_assoc_change), 0, gfp); + + if (!skb) + goto fail; + + /* put back the chunk header now that we have a copy */ + skb_pull(chunk->skb, sizeof(sctp_chunkhdr_t)); + + /* Embed the event fields inside the cloned skb. */ + event = sctp_skb2event(skb); + sctp_ulpevent_init(event, MSG_NOTIFICATION, skb->truesize); + + /* Include the notification structure */ + sac = (struct sctp_assoc_change *) + skb_push(skb, sizeof(struct sctp_assoc_change)); + + /* Trim the buffer to the right length. */ + skb_trim(skb, sizeof(struct sctp_assoc_change) + + ntohs(chunk->chunk_hdr->length)); + } else { + event = sctp_ulpevent_new(sizeof(struct sctp_assoc_change), MSG_NOTIFICATION, gfp); - if (!event) - goto fail; - skb = sctp_event2skb(event); - sac = (struct sctp_assoc_change *) - skb_put(skb, sizeof(struct sctp_assoc_change)); + if (!event) + goto fail; + + skb = sctp_event2skb(event); + sac = (struct sctp_assoc_change *) skb_put(skb, + sizeof(struct sctp_assoc_change)); + } /* Socket Extensions for SCTP * 5.3.1.1 SCTP_ASSOC_CHANGE diff -puN net/sctp/ulpqueue.c~git-net net/sctp/ulpqueue.c --- a/net/sctp/ulpqueue.c~git-net +++ a/net/sctp/ulpqueue.c @@ -138,26 +138,59 @@ int sctp_ulpq_tail_data(struct sctp_ulpq /* Clear the partial delivery mode for this socket. Note: This * assumes that no association is currently in partial delivery mode. 
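/*
 * [Editorial aside, not part of the patch] The sctp_ulpevent_make_assoc_change()
 * change above uses a common sk_buff idiom: copy the skb while reserving
 * fresh headroom, then push the notification header in front of the copied
 * payload.  A reduced sketch (chunk-header juggling and trimming elided):
 */
static struct sk_buff *prepend_notification(struct sk_buff *orig, gfp_t gfp)
{
	struct sctp_assoc_change *sac;
	struct sk_buff *skb;

	/* copy with sizeof(*sac) bytes of extra headroom, no extra tailroom */
	skb = skb_copy_expand(orig, sizeof(*sac), 0, gfp);
	if (!skb)
		return NULL;

	/* place the notification immediately in front of the copied data */
	sac = (struct sctp_assoc_change *)skb_push(skb, sizeof(*sac));
	memset(sac, 0, sizeof(*sac));
	return skb;
}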
*/ -int sctp_clear_pd(struct sock *sk) +int sctp_clear_pd(struct sock *sk, struct sctp_association *asoc) { struct sctp_sock *sp = sctp_sk(sk); - sp->pd_mode = 0; - if (!skb_queue_empty(&sp->pd_lobby)) { - struct list_head *list; - sctp_skb_list_tail(&sp->pd_lobby, &sk->sk_receive_queue); - list = (struct list_head *)&sctp_sk(sk)->pd_lobby; - INIT_LIST_HEAD(list); - return 1; + if (atomic_dec_and_test(&sp->pd_mode)) { + /* This means there are no other associations in PD, so + * we can go ahead and clear out the lobby in one shot + */ + if (!skb_queue_empty(&sp->pd_lobby)) { + struct list_head *list; + sctp_skb_list_tail(&sp->pd_lobby, &sk->sk_receive_queue); + list = (struct list_head *)&sctp_sk(sk)->pd_lobby; + INIT_LIST_HEAD(list); + return 1; + } + } else { + /* There are other associations in PD, so we only need to + * pull stuff out of the lobby that belongs to the + * associations that is exiting PD (all of its notifications + * are posted here). + */ + if (!skb_queue_empty(&sp->pd_lobby) && asoc) { + struct sk_buff *skb, *tmp; + struct sctp_ulpevent *event; + + sctp_skb_for_each(skb, &sp->pd_lobby, tmp) { + event = sctp_skb2event(skb); + if (event->asoc == asoc) { + __skb_unlink(skb, &sp->pd_lobby); + __skb_queue_tail(&sk->sk_receive_queue, + skb); + } + } + } } + return 0; } +/* Set the pd_mode on the socket and ulpq */ +static void sctp_ulpq_set_pd(struct sctp_ulpq *ulpq) +{ + struct sctp_sock *sp = sctp_sk(ulpq->asoc->base.sk); + + atomic_inc(&sp->pd_mode); + ulpq->pd_mode = 1; +} + /* Clear the pd_mode and restart any pending messages waiting for delivery. */ static int sctp_ulpq_clear_pd(struct sctp_ulpq *ulpq) { ulpq->pd_mode = 0; - return sctp_clear_pd(ulpq->asoc->base.sk); + return sctp_clear_pd(ulpq->asoc->base.sk, ulpq->asoc); } /* If the SKB of 'event' is on a list, it is the first such member @@ -187,25 +220,35 @@ int sctp_ulpq_tail_event(struct sctp_ulp * the association the cause of the partial delivery. */ - if (!sctp_sk(sk)->pd_mode) { + if (atomic_read(&sctp_sk(sk)->pd_mode) == 0) { queue = &sk->sk_receive_queue; - } else if (ulpq->pd_mode) { - /* If the association is in partial delivery, we - * need to finish delivering the partially processed - * packet before passing any other data. This is - * because we don't truly support stream interleaving. - */ - if ((event->msg_flags & MSG_NOTIFICATION) || - (SCTP_DATA_NOT_FRAG == - (event->msg_flags & SCTP_DATA_FRAG_MASK))) - queue = &sctp_sk(sk)->pd_lobby; - else { - clear_pd = event->msg_flags & MSG_EOR; - queue = &sk->sk_receive_queue; + } else { + if (ulpq->pd_mode) { + /* If the association is in partial delivery, we + * need to finish delivering the partially processed + * packet before passing any other data. This is + * because we don't truly support stream interleaving. + */ + if ((event->msg_flags & MSG_NOTIFICATION) || + (SCTP_DATA_NOT_FRAG == + (event->msg_flags & SCTP_DATA_FRAG_MASK))) + queue = &sctp_sk(sk)->pd_lobby; + else { + clear_pd = event->msg_flags & MSG_EOR; + queue = &sk->sk_receive_queue; + } + } else { + /* + * If fragment interleave is enabled, we + * can queue this to the recieve queue instead + * of the lobby. + */ + if (sctp_sk(sk)->frag_interleave) + queue = &sk->sk_receive_queue; + else + queue = &sctp_sk(sk)->pd_lobby; } - } else - queue = &sctp_sk(sk)->pd_lobby; - + } /* If we are harvesting multiple skbs they will be * collected on a list. 
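/*
 * [Editorial aside, not part of the patch] With fragment interleave,
 * several associations on one socket may sit in partial delivery at once,
 * which is why pd_mode above becomes an atomic counter.  The accounting,
 * reduced to its essentials:
 */
static void example_enter_pd(atomic_t *pd_mode)
{
	atomic_inc(pd_mode);		/* one more association in PD */
}

static int example_leave_pd(atomic_t *pd_mode)
{
	/* True only for the last association to leave PD; that caller may
	 * flush the whole pd_lobby, while everyone else pulls out only the
	 * events belonging to the association that is exiting.
	 */
	return atomic_dec_and_test(pd_mode);
}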
@@ -348,7 +391,7 @@ static struct sctp_ulpevent *sctp_make_r break; pos->next = pnext; pos = pnext; - }; + } event = sctp_skb2event(f_frag); SCTP_INC_STATS(SCTP_MIB_REASMUSRMSGS); @@ -367,6 +410,11 @@ static inline struct sctp_ulpevent *sctp struct sk_buff *first_frag = NULL; __u32 ctsn, next_tsn; struct sctp_ulpevent *retval = NULL; + struct sk_buff *pd_first = NULL; + struct sk_buff *pd_last = NULL; + size_t pd_len = 0; + struct sctp_association *asoc; + u32 pd_point; /* Initialized to 0 just to avoid compiler warning message. Will * never be used with this value. It is referenced only after it @@ -382,6 +430,10 @@ static inline struct sctp_ulpevent *sctp * we expect to find the remaining middle fragments and the last * fragment in order. If not, first_frag is reset to NULL and we * start the next pass when we find another first fragment. + * + * There is a potential to do partial delivery if user sets + * SCTP_PARTIAL_DELIVERY_POINT option. Lets count some things here + * to see if can do PD. */ skb_queue_walk(&ulpq->reasm, pos) { cevent = sctp_skb2event(pos); @@ -389,14 +441,32 @@ static inline struct sctp_ulpevent *sctp switch (cevent->msg_flags & SCTP_DATA_FRAG_MASK) { case SCTP_DATA_FIRST_FRAG: + /* If this "FIRST_FRAG" is the first + * element in the queue, then count it towards + * possible PD. + */ + if (pos == ulpq->reasm.next) { + pd_first = pos; + pd_last = pos; + pd_len = pos->len; + } else { + pd_first = NULL; + pd_last = NULL; + pd_len = 0; + } + first_frag = pos; next_tsn = ctsn + 1; break; case SCTP_DATA_MIDDLE_FRAG: - if ((first_frag) && (ctsn == next_tsn)) + if ((first_frag) && (ctsn == next_tsn)) { next_tsn++; - else + if (pd_first) { + pd_last = pos; + pd_len += pos->len; + } + } else first_frag = NULL; break; @@ -406,8 +476,29 @@ static inline struct sctp_ulpevent *sctp else first_frag = NULL; break; - }; + } + } + asoc = ulpq->asoc; + if (pd_first) { + /* Make sure we can enter partial deliver. + * We can trigger partial delivery only if framgent + * interleave is set, or the socket is not already + * in partial delivery. + */ + if (!sctp_sk(asoc->base.sk)->frag_interleave && + atomic_read(&sctp_sk(asoc->base.sk)->pd_mode)) + goto done; + + cevent = sctp_skb2event(pd_first); + pd_point = sctp_sk(asoc->base.sk)->pd_point; + if (pd_point && pd_point <= pd_len) { + retval = sctp_make_reassembled_event(&ulpq->reasm, + pd_first, + pd_last); + if (retval) + sctp_ulpq_set_pd(ulpq); + } } done: return retval; @@ -465,7 +556,7 @@ static inline struct sctp_ulpevent *sctp goto done; default: return NULL; - }; + } } /* We have the reassembled event. There is no need to look @@ -557,7 +648,7 @@ static inline struct sctp_ulpevent *sctp break; default: return NULL; - }; + } } /* We have the reassembled event. There is no need to look @@ -826,19 +917,29 @@ void sctp_ulpq_partial_delivery(struct s { struct sctp_ulpevent *event; struct sctp_association *asoc; + struct sctp_sock *sp; asoc = ulpq->asoc; + sp = sctp_sk(asoc->base.sk); - /* Are we already in partial delivery mode? */ - if (!sctp_sk(asoc->base.sk)->pd_mode) { + /* If the association is already in Partial Delivery mode + * we have noting to do. + */ + if (ulpq->pd_mode) + return; + /* If the user enabled fragment interleave socket option, + * multiple associations can enter partial delivery. + * Otherwise, we can only enter partial delivery if the + * socket is not in partial deliver mode. + */ + if (sp->frag_interleave || atomic_read(&sp->pd_mode) == 0) { /* Is partial delivery possible? 
*/ event = sctp_ulpq_retrieve_first(ulpq); /* Send event to the ULP. */ if (event) { sctp_ulpq_tail_event(ulpq, event); - sctp_sk(asoc->base.sk)->pd_mode = 1; - ulpq->pd_mode = 1; + sctp_ulpq_set_pd(ulpq); return; } } diff -puN net/socket.c~git-net net/socket.c --- a/net/socket.c~git-net +++ a/net/socket.c @@ -585,6 +585,37 @@ int kernel_sendmsg(struct socket *sock, return result; } +/* + * called from sock_recv_timestamp() if sock_flag(sk, SOCK_RCVTSTAMP) + */ +void __sock_recv_timestamp(struct msghdr *msg, struct sock *sk, + struct sk_buff *skb) +{ + ktime_t kt = skb->tstamp; + + if (!sock_flag(sk, SOCK_RCVTSTAMPNS)) { + struct timeval tv; + /* Race occurred between timestamp enabling and packet + receiving. Fill in the current time for now. */ + if (kt.tv64 == 0) + kt = ktime_get_real(); + skb->tstamp = kt; + tv = ktime_to_timeval(kt); + put_cmsg(msg, SOL_SOCKET, SCM_TIMESTAMP, sizeof(tv), &tv); + } else { + struct timespec ts; + /* Race occurred between timestamp enabling and packet + receiving. Fill in the current time for now. */ + if (kt.tv64 == 0) + kt = ktime_get_real(); + skb->tstamp = kt; + ts = ktime_to_timespec(kt); + put_cmsg(msg, SOL_SOCKET, SCM_TIMESTAMPNS, sizeof(ts), &ts); + } +} + +EXPORT_SYMBOL_GPL(__sock_recv_timestamp); + static inline int __sock_recvmsg(struct kiocb *iocb, struct socket *sock, struct msghdr *msg, size_t size, int flags) { @@ -1292,7 +1323,7 @@ asmlinkage long sys_bind(int fd, struct int err, fput_needed; sock = sockfd_lookup_light(fd, &err, &fput_needed); - if(sock) { + if (sock) { err = move_addr_to_kernel(umyaddr, addrlen, address); if (err >= 0) { err = security_socket_bind(sock, diff -puN net/sunrpc/socklib.c~git-net net/sunrpc/socklib.c --- a/net/sunrpc/socklib.c~git-net +++ a/net/sunrpc/socklib.c @@ -154,7 +154,7 @@ int csum_partial_copy_to_xdr(struct xdr_ desc.offset = sizeof(struct udphdr); desc.count = skb->len - desc.offset; - if (skb->ip_summed == CHECKSUM_UNNECESSARY) + if (skb_csum_unnecessary(skb)) goto no_checksum; desc.csum = csum_partial(skb->data, desc.offset, skb->csum); diff -puN net/sunrpc/svcsock.c~git-net net/sunrpc/svcsock.c --- a/net/sunrpc/svcsock.c~git-net +++ a/net/sunrpc/svcsock.c @@ -798,16 +798,12 @@ svc_udp_recvfrom(struct svc_rqst *rqstp) dprintk("svc: recvfrom returned error %d\n", -err); } rqstp->rq_addrlen = sizeof(rqstp->rq_addr); - if (skb->tstamp.off_sec == 0) { - struct timeval tv; - - tv.tv_sec = xtime.tv_sec; - tv.tv_usec = xtime.tv_nsec / NSEC_PER_USEC; - skb_set_timestamp(skb, &tv); + if (skb->tstamp.tv64 == 0) { + skb->tstamp = ktime_get_real(); /* Don't enable netstamp, sunrpc doesn't need that much accuracy */ } - skb_get_timestamp(skb, &svsk->sk_sk->sk_stamp); + svsk->sk_sk->sk_stamp = skb->tstamp; set_bit(SK_DATA, &svsk->sk_flags); /* there may be more data... 
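/*
 * [Editorial aside, not part of the patch] Userspace view of the
 * __sock_recv_timestamp() helper above: with SO_TIMESTAMPNS enabled the
 * receive time arrives as an SCM_TIMESTAMPNS control message carrying a
 * struct timespec (the old SO_TIMESTAMP path keeps delivering a timeval
 * via SCM_TIMESTAMP).  The datagram socket fd is assumed, and the option
 * names are assumed to be present in the installed socket headers.
 */
#include <string.h>
#include <sys/socket.h>
#include <sys/uio.h>
#include <time.h>

static void recv_with_timestamp(int fd)
{
	char data[2048], ctrl[CMSG_SPACE(sizeof(struct timespec))];
	struct iovec iov = { .iov_base = data, .iov_len = sizeof(data) };
	struct msghdr msg = {
		.msg_iov = &iov, .msg_iovlen = 1,
		.msg_control = ctrl, .msg_controllen = sizeof(ctrl),
	};
	struct cmsghdr *cm;
	struct timespec ts;
	int on = 1;

	setsockopt(fd, SOL_SOCKET, SO_TIMESTAMPNS, &on, sizeof(on));
	if (recvmsg(fd, &msg, 0) < 0)
		return;

	for (cm = CMSG_FIRSTHDR(&msg); cm; cm = CMSG_NXTHDR(&msg, cm)) {
		if (cm->cmsg_level == SOL_SOCKET &&
		    cm->cmsg_type == SCM_TIMESTAMPNS) {
			memcpy(&ts, CMSG_DATA(cm), sizeof(ts));
			/* ts now holds the packet's receive time */
		}
	}
}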
*/ /* diff -puN net/tipc/config.c~git-net net/tipc/config.c --- a/net/tipc/config.c~git-net +++ a/net/tipc/config.c @@ -89,7 +89,7 @@ struct sk_buff *tipc_cfg_reply_alloc(int int tipc_cfg_append_tlv(struct sk_buff *buf, int tlv_type, void *tlv_data, int tlv_data_size) { - struct tlv_desc *tlv = (struct tlv_desc *)buf->tail; + struct tlv_desc *tlv = (struct tlv_desc *)skb_tail_pointer(buf); int new_tlv_space = TLV_SPACE(tlv_data_size); if (skb_tailroom(buf) < new_tlv_space) { diff -puN net/tipc/eth_media.c~git-net net/tipc/eth_media.c --- a/net/tipc/eth_media.c~git-net +++ a/net/tipc/eth_media.c @@ -73,7 +73,7 @@ static int send_msg(struct sk_buff *buf, clone = skb_clone(buf, GFP_ATOMIC); if (clone) { - clone->nh.raw = clone->data; + skb_reset_network_header(clone); dev = ((struct eth_bearer *)(tb_ptr->usr_handle))->dev; clone->dev = dev; dev->hard_header(clone, dev, ETH_P_TIPC, @@ -99,8 +99,8 @@ static int recv_msg(struct sk_buff *buf, if (likely(eb_ptr->bearer)) { if (likely(!dev->promiscuity) || - !memcmp(buf->mac.raw,dev->dev_addr,ETH_ALEN) || - !memcmp(buf->mac.raw,dev->broadcast,ETH_ALEN)) { + !memcmp(skb_mac_header(buf), dev->dev_addr, ETH_ALEN) || + !memcmp(skb_mac_header(buf), dev->broadcast, ETH_ALEN)) { size = msg_size((struct tipc_msg *)buf->data); skb_trim(buf, size); if (likely(buf->len == size)) { @@ -140,7 +140,7 @@ static int enable_bearer(struct tipc_bea return -EDQUOT; if (!eb_ptr->dev) { eb_ptr->dev = dev; - eb_ptr->tipc_packet_type.type = __constant_htons(ETH_P_TIPC); + eb_ptr->tipc_packet_type.type = htons(ETH_P_TIPC); eb_ptr->tipc_packet_type.dev = dev; eb_ptr->tipc_packet_type.func = recv_msg; eb_ptr->tipc_packet_type.af_packet_priv = eb_ptr; diff -puN net/tipc/link.c~git-net net/tipc/link.c --- a/net/tipc/link.c~git-net +++ a/net/tipc/link.c @@ -1001,7 +1001,7 @@ static int link_bundle_buf(struct link * return 0; skb_put(bundler, pad + size); - memcpy(bundler->data + to_pos, buf->data, size); + skb_copy_to_linear_data_offset(bundler, to_pos, buf->data, size); msg_set_size(bundler_msg, to_pos + size); msg_set_msgcnt(bundler_msg, msg_msgcnt(bundler_msg) + 1); dbg("Packed msg # %u(%u octets) into pos %u in buf(#%u)\n", @@ -1109,8 +1109,8 @@ int tipc_link_send_buf(struct link *l_pt if (bundler) { msg_init(&bundler_hdr, MSG_BUNDLER, OPEN_MSG, TIPC_OK, INT_H_SIZE, l_ptr->addr); - memcpy(bundler->data, (unchar *)&bundler_hdr, - INT_H_SIZE); + skb_copy_to_linear_data(bundler, &bundler_hdr, + INT_H_SIZE); skb_trim(bundler, INT_H_SIZE); link_bundle_buf(l_ptr, bundler, buf); buf = bundler; @@ -1383,9 +1383,9 @@ again: if (!buf) return -ENOMEM; buf->next = NULL; - memcpy(buf->data, (unchar *)&fragm_hdr, INT_H_SIZE); + skb_copy_to_linear_data(buf, &fragm_hdr, INT_H_SIZE); hsz = msg_hdr_sz(hdr); - memcpy(buf->data + INT_H_SIZE, (unchar *)hdr, hsz); + skb_copy_to_linear_data_offset(buf, INT_H_SIZE, hdr, hsz); msg_dbg(buf_msg(buf), ">BUILD>"); /* Chop up message: */ @@ -1416,8 +1416,8 @@ error: return -EFAULT; } } else - memcpy(buf->data + fragm_crs, sect_crs, sz); - + skb_copy_to_linear_data_offset(buf, fragm_crs, + sect_crs, sz); sect_crs += sz; sect_rest -= sz; fragm_crs += sz; @@ -1442,7 +1442,7 @@ error: buf->next = NULL; prev->next = buf; - memcpy(buf->data, (unchar *)&fragm_hdr, INT_H_SIZE); + skb_copy_to_linear_data(buf, &fragm_hdr, INT_H_SIZE); fragm_crs = INT_H_SIZE; fragm_rest = fragm_sz; msg_dbg(buf_msg(buf)," >BUILD>"); @@ -2130,7 +2130,7 @@ void tipc_link_send_proto_msg(struct lin buf = l_ptr->proto_msg_queue; if (!buf) return; - memcpy(buf->data, (unchar *)msg, 
sizeof(l_ptr->proto_msg)); + skb_copy_to_linear_data(buf, msg, sizeof(l_ptr->proto_msg)); return; } msg_set_timestamp(msg, jiffies_to_msecs(jiffies)); @@ -2143,7 +2143,7 @@ void tipc_link_send_proto_msg(struct lin if (!buf) return; - memcpy(buf->data, (unchar *)msg, sizeof(l_ptr->proto_msg)); + skb_copy_to_linear_data(buf, msg, sizeof(l_ptr->proto_msg)); msg_set_size(buf_msg(buf), msg_size); if (tipc_bearer_send(l_ptr->b_ptr, buf, &l_ptr->media_addr)) { @@ -2319,8 +2319,8 @@ void tipc_link_tunnel(struct link *l_ptr "unable to send tunnel msg\n"); return; } - memcpy(buf->data, (unchar *)tunnel_hdr, INT_H_SIZE); - memcpy(buf->data + INT_H_SIZE, (unchar *)msg, length); + skb_copy_to_linear_data(buf, tunnel_hdr, INT_H_SIZE); + skb_copy_to_linear_data_offset(buf, INT_H_SIZE, msg, length); dbg("%c->%c:", l_ptr->b_ptr->net_plane, tunnel->b_ptr->net_plane); msg_dbg(buf_msg(buf), ">SEND>"); tipc_link_send_buf(tunnel, buf); @@ -2361,7 +2361,7 @@ void tipc_link_changeover(struct link *l buf = buf_acquire(INT_H_SIZE); if (buf) { - memcpy(buf->data, (unchar *)&tunnel_hdr, INT_H_SIZE); + skb_copy_to_linear_data(buf, &tunnel_hdr, INT_H_SIZE); msg_set_size(&tunnel_hdr, INT_H_SIZE); dbg("%c->%c:", l_ptr->b_ptr->net_plane, tunnel->b_ptr->net_plane); @@ -2426,8 +2426,9 @@ void tipc_link_send_duplicate(struct lin "unable to send duplicate msg\n"); return; } - memcpy(outbuf->data, (unchar *)&tunnel_hdr, INT_H_SIZE); - memcpy(outbuf->data + INT_H_SIZE, iter->data, length); + skb_copy_to_linear_data(outbuf, &tunnel_hdr, INT_H_SIZE); + skb_copy_to_linear_data_offset(outbuf, INT_H_SIZE, iter->data, + length); dbg("%c->%c:", l_ptr->b_ptr->net_plane, tunnel->b_ptr->net_plane); msg_dbg(buf_msg(outbuf), ">SEND>"); @@ -2457,7 +2458,7 @@ static struct sk_buff *buf_extract(struc eb = buf_acquire(size); if (eb) - memcpy(eb->data, (unchar *)msg, size); + skb_copy_to_linear_data(eb, msg, size); return eb; } @@ -2569,7 +2570,7 @@ void tipc_link_recv_bundle(struct sk_buf if (obuf == NULL) { warn("Link unable to unbundle message(s)\n"); break; - }; + } pos += align(msg_size(buf_msg(obuf))); msg_dbg(buf_msg(obuf), " /"); tipc_net_route_msg(obuf); @@ -2631,9 +2632,9 @@ int tipc_link_send_long_buf(struct link goto exit; } msg_set_size(&fragm_hdr, fragm_sz + INT_H_SIZE); - memcpy(fragm->data, (unchar *)&fragm_hdr, INT_H_SIZE); - memcpy(fragm->data + INT_H_SIZE, crs, fragm_sz); - + skb_copy_to_linear_data(fragm, &fragm_hdr, INT_H_SIZE); + skb_copy_to_linear_data_offset(fragm, INT_H_SIZE, crs, + fragm_sz); /* Send queued messages first, if any: */ l_ptr->stats.sent_fragments++; @@ -2733,8 +2734,8 @@ int tipc_link_recv_fragment(struct sk_bu if (pbuf != NULL) { pbuf->next = *pending; *pending = pbuf; - memcpy(pbuf->data, (unchar *)imsg, msg_data_sz(fragm)); - + skb_copy_to_linear_data(pbuf, imsg, + msg_data_sz(fragm)); /* Prepare buffer for subsequent fragments. */ set_long_msg_seqno(pbuf, long_msg_seq_no); @@ -2750,7 +2751,8 @@ int tipc_link_recv_fragment(struct sk_bu u32 fsz = get_fragm_size(pbuf); u32 crs = ((msg_fragm_no(fragm) - 1) * fsz); u32 exp_frags = get_expected_frags(pbuf) - 1; - memcpy(pbuf->data + crs, msg_data(fragm), dsz); + skb_copy_to_linear_data_offset(pbuf, crs, + msg_data(fragm), dsz); buf_discard(fbuf); /* Is message complete? 
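/*
 * [Editorial aside, not part of the patch] The TIPC changes above are one
 * mechanical substitution: memcpy() into skb->data becomes
 * skb_copy_to_linear_data(), and memcpy() at an offset becomes
 * skb_copy_to_linear_data_offset(), so the linear-area pointer arithmetic
 * lives in the helpers.  Sketch of the equivalence:
 */
static void fill_buf(struct sk_buff *skb, const void *hdr, unsigned int hsz,
		     const void *body, unsigned int blen)
{
	/* was: memcpy(skb->data, hdr, hsz); */
	skb_copy_to_linear_data(skb, hdr, hsz);

	/* was: memcpy(skb->data + hsz, body, blen); */
	skb_copy_to_linear_data_offset(skb, hsz, body, blen);
}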
*/ diff -puN net/tipc/msg.h~git-net net/tipc/msg.h --- a/net/tipc/msg.h~git-net +++ a/net/tipc/msg.h @@ -786,15 +786,16 @@ static inline int msg_build(struct tipc_ *buf = buf_acquire(sz); if (!(*buf)) return -ENOMEM; - memcpy((*buf)->data, (unchar *)hdr, hsz); + skb_copy_to_linear_data(*buf, hdr, hsz); for (res = 1, cnt = 0; res && (cnt < num_sect); cnt++) { if (likely(usrmem)) res = !copy_from_user((*buf)->data + pos, msg_sect[cnt].iov_base, msg_sect[cnt].iov_len); else - memcpy((*buf)->data + pos, msg_sect[cnt].iov_base, - msg_sect[cnt].iov_len); + skb_copy_to_linear_data_offset(*buf, pos, + msg_sect[cnt].iov_base, + msg_sect[cnt].iov_len); pos += msg_sect[cnt].iov_len; } if (likely(res)) diff -puN net/tipc/netlink.c~git-net net/tipc/netlink.c --- a/net/tipc/netlink.c~git-net +++ a/net/tipc/netlink.c @@ -57,7 +57,7 @@ static int handle_cmd(struct sk_buff *sk if (rep_buf) { skb_push(rep_buf, hdr_space); - rep_nlh = (struct nlmsghdr *)rep_buf->data; + rep_nlh = nlmsg_hdr(rep_buf); memcpy(rep_nlh, req_nlh, hdr_space); rep_nlh->nlmsg_len = rep_buf->len; genlmsg_unicast(rep_buf, req_nlh->nlmsg_pid); diff -puN net/tipc/port.c~git-net net/tipc/port.c --- a/net/tipc/port.c~git-net +++ a/net/tipc/port.c @@ -464,7 +464,7 @@ int tipc_reject_msg(struct sk_buff *buf, msg_set_size(rmsg, data_sz + hdr_sz); msg_set_nametype(rmsg, msg_nametype(msg)); msg_set_nameinst(rmsg, msg_nameinst(msg)); - memcpy(rbuf->data + hdr_sz, msg_data(msg), data_sz); + skb_copy_to_linear_data_offset(rbuf, hdr_sz, msg_data(msg), data_sz); /* send self-abort message when rejecting on a connected port */ if (msg_connected(msg)) { @@ -1419,7 +1419,7 @@ int tipc_send_buf(u32 ref, struct sk_buf return -ENOMEM; skb_push(buf, hsz); - memcpy(buf->data, (unchar *)msg, hsz); + skb_copy_to_linear_data(buf, msg, hsz); destnode = msg_destnode(msg); p_ptr->publ.congested = 1; if (!tipc_port_congested(p_ptr)) { @@ -1555,7 +1555,7 @@ int tipc_forward_buf2name(u32 ref, if (skb_cow(buf, LONG_H_SIZE)) return -ENOMEM; skb_push(buf, LONG_H_SIZE); - memcpy(buf->data, (unchar *)msg, LONG_H_SIZE); + skb_copy_to_linear_data(buf, msg, LONG_H_SIZE); msg_dbg(buf_msg(buf),"PREP:"); if (likely(destport || destnode)) { p_ptr->sent++; @@ -1679,7 +1679,7 @@ int tipc_forward_buf2port(u32 ref, return -ENOMEM; skb_push(buf, DIR_MSG_H_SIZE); - memcpy(buf->data, (unchar *)msg, DIR_MSG_H_SIZE); + skb_copy_to_linear_data(buf, msg, DIR_MSG_H_SIZE); msg_dbg(msg, "buf2port: "); p_ptr->sent++; if (dest->node == tipc_own_addr) diff -puN net/tipc/socket.c~git-net net/tipc/socket.c --- a/net/tipc/socket.c~git-net +++ a/net/tipc/socket.c @@ -1020,7 +1020,7 @@ restart: if (!err) { buf_crs = (unsigned char *)(TIPC_SKB_CB(buf)->handle); - sz = buf->tail - buf_crs; + sz = skb_tail_pointer(buf) - buf_crs; needed = (buf_len - sz_copied); sz_to_copy = (sz <= needed) ? 
sz : needed; diff -puN net/unix/af_unix.c~git-net net/unix/af_unix.c --- a/net/unix/af_unix.c~git-net +++ a/net/unix/af_unix.c @@ -1319,7 +1319,7 @@ static int unix_dgram_sendmsg(struct kio unix_attach_fds(siocb->scm, skb); unix_get_secdata(siocb->scm, skb); - skb->h.raw = skb->data; + skb_reset_transport_header(skb); err = memcpy_fromiovec(skb_put(skb,len), msg->msg_iov, len); if (err) goto out_free; diff -puN net/wanrouter/wanmain.c~git-net net/wanrouter/wanmain.c --- a/net/wanrouter/wanmain.c~git-net +++ a/net/wanrouter/wanmain.c @@ -277,8 +277,8 @@ int wanrouter_encapsulate(struct sk_buff skb_push(skb, 7); skb->data[0] = 0; skb->data[1] = NLPID_SNAP; - memcpy(&skb->data[2], wanrouter_oui_ether, - sizeof(wanrouter_oui_ether)); + skb_copy_to_linear_data_offset(skb, 2, wanrouter_oui_ether, + sizeof(wanrouter_oui_ether)); *((unsigned short*)&skb->data[5]) = htons(type); break; @@ -339,7 +339,7 @@ __be16 wanrouter_type_trans(struct sk_bu skb->protocol = ethertype; skb->pkt_type = PACKET_HOST; /* Physically point to point */ skb_pull(skb, cnt); - skb->mac.raw = skb->data; + skb_reset_mac_header(skb); return ethertype; } diff -puN /dev/null net/wireless/Kconfig --- /dev/null +++ a/net/wireless/Kconfig @@ -0,0 +1,16 @@ +config CFG80211 + tristate "Improved wireless configuration API" + +config WIRELESS_EXT + bool "Wireless extensions" + default n + ---help--- + This option enables the legacy wireless extensions + (wireless network interface configuration via ioctls.) + + Wireless extensions will be replaced by cfg80211 and + will be required only by legacy drivers that implement + wireless extension handlers. + + Say N (if you can) unless you know you need wireless + extensions for external modules. diff -puN /dev/null net/wireless/Makefile --- /dev/null +++ a/net/wireless/Makefile @@ -0,0 +1,3 @@ +obj-$(CONFIG_CFG80211) += cfg80211.o + +cfg80211-y += core.o sysfs.o diff -puN /dev/null net/wireless/core.c --- /dev/null +++ a/net/wireless/core.c @@ -0,0 +1,209 @@ +/* + * This is the linux wireless configuration interface. + * + * Copyright 2006, 2007 Johannes Berg + */ + +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include "core.h" +#include "sysfs.h" + +/* name for sysfs, %d is appended */ +#define PHY_NAME "phy" + +MODULE_AUTHOR("Johannes Berg"); +MODULE_LICENSE("GPL"); +MODULE_DESCRIPTION("wireless configuration support"); + +/* RCU might be appropriate here since we usually + * only read the list, and that can happen quite + * often because we need to do it for each command */ +LIST_HEAD(cfg80211_drv_list); +DEFINE_MUTEX(cfg80211_drv_mutex); +static int wiphy_counter; + +/* for debugfs */ +static struct dentry *ieee80211_debugfs_dir; + +/* exported functions */ + +struct wiphy *wiphy_new(struct cfg80211_ops *ops, int sizeof_priv) +{ + struct cfg80211_registered_device *drv; + int alloc_size; + + alloc_size = sizeof(*drv) + sizeof_priv; + + drv = kzalloc(alloc_size, GFP_KERNEL); + if (!drv) + return NULL; + + drv->ops = ops; + + mutex_lock(&cfg80211_drv_mutex); + + if (unlikely(wiphy_counter<0)) { + /* ugh, wrapped! 
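/*
 * [Editorial aside, not part of the patch] Sketch of how a driver might use
 * the wiphy lifecycle introduced by the new core.c (wiphy_new() above,
 * wiphy_register()/wiphy_unregister()/wiphy_free() just below).
 * "struct my_priv" and the empty ops table are assumptions for the example;
 * at this stage cfg80211 defines no mandatory callbacks.
 */
struct my_priv {
	int placeholder;	/* driver-private state would live here */
};

static struct cfg80211_ops my_ops;	/* nothing to fill in yet */
static struct wiphy *my_wiphy;

static int __init my_wireless_init(void)
{
	int err;

	my_wiphy = wiphy_new(&my_ops, sizeof(struct my_priv));
	if (!my_wiphy)
		return -ENOMEM;

	err = wiphy_register(my_wiphy);
	if (err) {
		wiphy_free(my_wiphy);
		return err;
	}
	return 0;
}

static void __exit my_wireless_exit(void)
{
	wiphy_unregister(my_wiphy);
	wiphy_free(my_wiphy);
}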
*/ + kfree(drv); + return NULL; + } + drv->idx = wiphy_counter; + + /* give it a proper name */ + snprintf(drv->wiphy.dev.bus_id, BUS_ID_SIZE, + PHY_NAME "%d", drv->idx); + + /* now increase counter for the next time */ + wiphy_counter++; + mutex_unlock(&cfg80211_drv_mutex); + + mutex_init(&drv->mtx); + mutex_init(&drv->devlist_mtx); + INIT_LIST_HEAD(&drv->netdev_list); + + device_initialize(&drv->wiphy.dev); + drv->wiphy.dev.class = &ieee80211_class; + drv->wiphy.dev.platform_data = drv; + + return &drv->wiphy; +} +EXPORT_SYMBOL(wiphy_new); + +int wiphy_register(struct wiphy *wiphy) +{ + struct cfg80211_registered_device *drv = wiphy_to_dev(wiphy); + int res; + + mutex_lock(&cfg80211_drv_mutex); + + + res = device_add(&drv->wiphy.dev); + if (res) + goto out_unlock; + + list_add(&drv->list, &cfg80211_drv_list); + + /* add to debugfs */ + drv->wiphy.debugfsdir = + debugfs_create_dir(wiphy_name(&drv->wiphy), + ieee80211_debugfs_dir); + + res = 0; +out_unlock: + mutex_unlock(&cfg80211_drv_mutex); + return res; +} +EXPORT_SYMBOL(wiphy_register); + +void wiphy_unregister(struct wiphy *wiphy) +{ + struct cfg80211_registered_device *drv = wiphy_to_dev(wiphy); + + mutex_lock(&cfg80211_drv_mutex); + + /* hold registered driver mutex during list removal as well + * to make sure no commands are in progress at the moment */ + mutex_lock(&drv->mtx); + list_del(&drv->list); + mutex_unlock(&drv->mtx); + + device_del(&drv->wiphy.dev); + debugfs_remove(drv->wiphy.debugfsdir); + + mutex_unlock(&cfg80211_drv_mutex); +} +EXPORT_SYMBOL(wiphy_unregister); + +void cfg80211_dev_free(struct cfg80211_registered_device *drv) +{ + mutex_destroy(&drv->mtx); + mutex_destroy(&drv->devlist_mtx); + kfree(drv); +} + +void wiphy_free(struct wiphy *wiphy) +{ + put_device(&wiphy->dev); +} +EXPORT_SYMBOL(wiphy_free); + +static int cfg80211_netdev_notifier_call(struct notifier_block * nb, + unsigned long state, + void *ndev) +{ + struct net_device *dev = ndev; + struct cfg80211_registered_device *rdev; + + if (!dev->ieee80211_ptr) + return 0; + + rdev = wiphy_to_dev(dev->ieee80211_ptr->wiphy); + + switch (state) { + case NETDEV_REGISTER: + mutex_lock(&rdev->devlist_mtx); + list_add(&dev->ieee80211_ptr->list, &rdev->netdev_list); + if (sysfs_create_link(&dev->dev.kobj, &rdev->wiphy.dev.kobj, + "phy80211")) { + printk(KERN_ERR "wireless: failed to add phy80211 " + "symlink to netdev!\n"); + } + dev->ieee80211_ptr->netdev = dev; + mutex_unlock(&rdev->devlist_mtx); + break; + case NETDEV_UNREGISTER: + mutex_lock(&rdev->devlist_mtx); + if (!list_empty(&dev->ieee80211_ptr->list)) { + sysfs_remove_link(&dev->dev.kobj, "phy80211"); + list_del_init(&dev->ieee80211_ptr->list); + } + mutex_unlock(&rdev->devlist_mtx); + break; + } + + return 0; +} + +static struct notifier_block cfg80211_netdev_notifier = { + .notifier_call = cfg80211_netdev_notifier_call, +}; + +static int cfg80211_init(void) +{ + int err = wiphy_sysfs_init(); + if (err) + goto out_fail_sysfs; + + err = register_netdevice_notifier(&cfg80211_netdev_notifier); + if (err) + goto out_fail_notifier; + + ieee80211_debugfs_dir = debugfs_create_dir("ieee80211", NULL); + + return 0; + +out_fail_notifier: + wiphy_sysfs_exit(); +out_fail_sysfs: + return err; +} +module_init(cfg80211_init); + +static void cfg80211_exit(void) +{ + debugfs_remove(ieee80211_debugfs_dir); + unregister_netdevice_notifier(&cfg80211_netdev_notifier); + wiphy_sysfs_exit(); +} +module_exit(cfg80211_exit); diff -puN /dev/null net/wireless/core.h --- /dev/null +++ a/net/wireless/core.h @@ -0,0 +1,49 @@ +/* + * 
Wireless configuration interface internals. + * + * Copyright 2006, 2007 Johannes Berg + */ +#ifndef __NET_WIRELESS_CORE_H +#define __NET_WIRELESS_CORE_H +#include +#include +#include +#include +#include +#include + +struct cfg80211_registered_device { + struct cfg80211_ops *ops; + struct list_head list; + /* we hold this mutex during any call so that + * we cannot do multiple calls at once, and also + * to avoid the deregister call to proceed while + * any call is in progress */ + struct mutex mtx; + + /* wiphy index, internal only */ + int idx; + + /* associate netdev list */ + struct mutex devlist_mtx; + struct list_head netdev_list; + + /* must be last because of the way we do wiphy_priv(), + * and it should at least be aligned to NETDEV_ALIGN */ + struct wiphy wiphy __attribute__((__aligned__(NETDEV_ALIGN))); +}; + +static inline +struct cfg80211_registered_device *wiphy_to_dev(struct wiphy *wiphy) +{ + BUG_ON(!wiphy); + return container_of(wiphy, struct cfg80211_registered_device, wiphy); +} + +extern struct mutex cfg80211_drv_mutex; +extern struct list_head cfg80211_drv_list; + +/* free object */ +extern void cfg80211_dev_free(struct cfg80211_registered_device *drv); + +#endif /* __NET_WIRELESS_CORE_H */ diff -puN /dev/null net/wireless/sysfs.c --- /dev/null +++ a/net/wireless/sysfs.c @@ -0,0 +1,80 @@ +/* + * This file provides /sys/class/ieee80211// + * and some default attributes. + * + * Copyright 2005-2006 Jiri Benc + * Copyright 2006 Johannes Berg + * + * This file is GPLv2 as found in COPYING. + */ + +#include +#include +#include +#include +#include +#include +#include "sysfs.h" +#include "core.h" + +static inline struct cfg80211_registered_device *dev_to_rdev( + struct device *dev) +{ + return container_of(dev, struct cfg80211_registered_device, wiphy.dev); +} + +static ssize_t _show_index(struct device *dev, struct device_attribute *attr, + char *buf) +{ + return sprintf(buf, "%d\n", dev_to_rdev(dev)->idx); +} + +static ssize_t _show_permaddr(struct device *dev, + struct device_attribute *attr, + char *buf) +{ + char *addr = dev_to_rdev(dev)->wiphy.perm_addr; + + return sprintf(buf, "%.2x:%.2x:%.2x:%.2x:%.2x:%.2x\n", + addr[0], addr[1], addr[2], addr[3], addr[4], addr[5]); +} + +static struct device_attribute ieee80211_dev_attrs[] = { + __ATTR(index, S_IRUGO, _show_index, NULL), + __ATTR(macaddress, S_IRUGO, _show_permaddr, NULL), + {} +}; + +static void wiphy_dev_release(struct device *dev) +{ + struct cfg80211_registered_device *rdev = dev_to_rdev(dev); + + cfg80211_dev_free(rdev); +} + +static int wiphy_uevent(struct device *dev, char **envp, + int num_envp, char *buf, int size) +{ + /* TODO, we probably need stuff here */ + return 0; +} + +struct class ieee80211_class = { + .name = "ieee80211", + .owner = THIS_MODULE, + .dev_release = wiphy_dev_release, + .dev_attrs = ieee80211_dev_attrs, +#ifdef CONFIG_HOTPLUG + .dev_uevent = wiphy_uevent, +#endif +}; + +int wiphy_sysfs_init(void) +{ + return class_register(&ieee80211_class); +} + +void wiphy_sysfs_exit(void) +{ + class_unregister(&ieee80211_class); +} diff -puN /dev/null net/wireless/sysfs.h --- /dev/null +++ a/net/wireless/sysfs.h @@ -0,0 +1,9 @@ +#ifndef __WIRELESS_SYSFS_H +#define __WIRELESS_SYSFS_H + +extern int wiphy_sysfs_init(void); +extern void wiphy_sysfs_exit(void); + +extern struct class ieee80211_class; + +#endif /* __WIRELESS_SYSFS_H */ diff -puN net/x25/af_x25.c~git-net net/x25/af_x25.c --- a/net/x25/af_x25.c~git-net +++ a/net/x25/af_x25.c @@ -951,7 +951,7 @@ int x25_rx_call_request(struct sk_buff * * 
Incoming Call User Data. */ if (skb->len >= 0) { - memcpy(makex25->calluserdata.cuddata, skb->data, skb->len); + skb_copy_from_linear_data(skb, makex25->calluserdata.cuddata, skb->len); makex25->calluserdata.cudlength = skb->len; } @@ -1058,9 +1058,10 @@ static int x25_sendmsg(struct kiocb *ioc */ SOCK_DEBUG(sk, "x25_sendmsg: Copying user data\n"); - asmptr = skb->h.raw = skb_put(skb, len); + skb_reset_transport_header(skb); + skb_put(skb, len); - rc = memcpy_fromiovec(asmptr, msg->msg_iov, len); + rc = memcpy_fromiovec(skb_transport_header(skb), msg->msg_iov, len); if (rc) goto out_kfree_skb; @@ -1210,8 +1211,7 @@ static int x25_recvmsg(struct kiocb *ioc } } - skb->h.raw = skb->data; - + skb_reset_transport_header(skb); copied = skb->len; if (copied > size) { @@ -1280,6 +1280,12 @@ static int x25_ioctl(struct socket *sock rc = sock_get_timestamp(sk, (struct timeval __user *)argp); break; + case SIOCGSTAMPNS: + rc = -EINVAL; + if (sk) + rc = sock_get_timestampns(sk, + (struct timespec __user *)argp); + break; case SIOCGIFADDR: case SIOCSIFADDR: case SIOCGIFDSTADDR: @@ -1521,6 +1527,12 @@ static int compat_x25_ioctl(struct socke rc = compat_sock_get_timestamp(sk, (struct timeval __user*)argp); break; + case SIOCGSTAMPNS: + rc = -EINVAL; + if (sk) + rc = compat_sock_get_timestampns(sk, + (struct timespec __user*)argp); + break; case SIOCGIFADDR: case SIOCSIFADDR: case SIOCGIFDSTADDR: diff -puN net/x25/x25_dev.c~git-net net/x25/x25_dev.c --- a/net/x25/x25_dev.c~git-net +++ a/net/x25/x25_dev.c @@ -48,7 +48,7 @@ static int x25_receive_data(struct sk_bu if ((sk = x25_find_socket(lci, nb)) != NULL) { int queued = 1; - skb->h.raw = skb->data; + skb_reset_transport_header(skb); bh_lock_sock(sk); if (!sock_owned_by_user(sk)) { queued = x25_process_rx_frame(sk, skb); @@ -191,7 +191,7 @@ void x25_send_frame(struct sk_buff *skb, { unsigned char *dptr; - skb->nh.raw = skb->data; + skb_reset_network_header(skb); switch (nb->dev->type) { case ARPHRD_X25: diff -puN net/x25/x25_in.c~git-net net/x25/x25_in.c --- a/net/x25/x25_in.c~git-net +++ a/net/x25/x25_in.c @@ -53,17 +53,20 @@ static int x25_queue_rx_frame(struct soc skb_queue_tail(&x25->fragment_queue, skb); - skbn->h.raw = skbn->data; + skb_reset_transport_header(skbn); skbo = skb_dequeue(&x25->fragment_queue); - memcpy(skb_put(skbn, skbo->len), skbo->data, skbo->len); + skb_copy_from_linear_data(skbo, skb_put(skbn, skbo->len), + skbo->len); kfree_skb(skbo); while ((skbo = skb_dequeue(&x25->fragment_queue)) != NULL) { skb_pull(skbo, (x25->neighbour->extended) ? X25_EXT_MIN_LEN : X25_STD_MIN_LEN); - memcpy(skb_put(skbn, skbo->len), skbo->data, skbo->len); + skb_copy_from_linear_data(skbo, + skb_put(skbn, skbo->len), + skbo->len); kfree_skb(skbo); } @@ -112,8 +115,9 @@ static int x25_state1_machine(struct soc * Copy any Call User Data. */ if (skb->len >= 0) { - memcpy(x25->calluserdata.cuddata, skb->data, - skb->len); + skb_copy_from_linear_data(skb, + x25->calluserdata.cuddata, + skb->len); x25->calluserdata.cudlength = skb->len; } if (!sock_flag(sk, SOCK_DEAD)) diff -puN net/x25/x25_out.c~git-net net/x25/x25_out.c --- a/net/x25/x25_out.c~git-net +++ a/net/x25/x25_out.c @@ -61,7 +61,7 @@ int x25_output(struct sock *sk, struct s if (skb->len - header_len > max_len) { /* Save a copy of the Header */ - memcpy(header, skb->data, header_len); + skb_copy_from_linear_data(skb, header, header_len); skb_pull(skb, header_len); frontlen = skb_headroom(skb); @@ -84,12 +84,12 @@ int x25_output(struct sock *sk, struct s len = max_len > skb->len ? 
skb->len : max_len; /* Copy the user data */ - memcpy(skb_put(skbn, len), skb->data, len); + skb_copy_from_linear_data(skb, skb_put(skbn, len), len); skb_pull(skb, len); /* Duplicate the Header */ skb_push(skbn, header_len); - memcpy(skbn->data, header, header_len); + skb_copy_to_linear_data(skbn, header, header_len); if (skb->len > 0) { if (x25->neighbour->extended) diff -puN net/xfrm/xfrm_algo.c~git-net net/xfrm/xfrm_algo.c --- a/net/xfrm/xfrm_algo.c~git-net +++ a/net/xfrm/xfrm_algo.c @@ -612,175 +612,6 @@ EXPORT_SYMBOL_GPL(skb_icv_walk); #if defined(CONFIG_INET_ESP) || defined(CONFIG_INET_ESP_MODULE) || defined(CONFIG_INET6_ESP) || defined(CONFIG_INET6_ESP_MODULE) -/* Looking generic it is not used in another places. */ - -int -skb_to_sgvec(struct sk_buff *skb, struct scatterlist *sg, int offset, int len) -{ - int start = skb_headlen(skb); - int i, copy = start - offset; - int elt = 0; - - if (copy > 0) { - if (copy > len) - copy = len; - sg[elt].page = virt_to_page(skb->data + offset); - sg[elt].offset = (unsigned long)(skb->data + offset) % PAGE_SIZE; - sg[elt].length = copy; - elt++; - if ((len -= copy) == 0) - return elt; - offset += copy; - } - - for (i = 0; i < skb_shinfo(skb)->nr_frags; i++) { - int end; - - BUG_TRAP(start <= offset + len); - - end = start + skb_shinfo(skb)->frags[i].size; - if ((copy = end - offset) > 0) { - skb_frag_t *frag = &skb_shinfo(skb)->frags[i]; - - if (copy > len) - copy = len; - sg[elt].page = frag->page; - sg[elt].offset = frag->page_offset+offset-start; - sg[elt].length = copy; - elt++; - if (!(len -= copy)) - return elt; - offset += copy; - } - start = end; - } - - if (skb_shinfo(skb)->frag_list) { - struct sk_buff *list = skb_shinfo(skb)->frag_list; - - for (; list; list = list->next) { - int end; - - BUG_TRAP(start <= offset + len); - - end = start + list->len; - if ((copy = end - offset) > 0) { - if (copy > len) - copy = len; - elt += skb_to_sgvec(list, sg+elt, offset - start, copy); - if ((len -= copy) == 0) - return elt; - offset += copy; - } - start = end; - } - } - BUG_ON(len); - return elt; -} -EXPORT_SYMBOL_GPL(skb_to_sgvec); - -/* Check that skb data bits are writable. If they are not, copy data - * to newly created private area. If "tailbits" is given, make sure that - * tailbits bytes beyond current end of skb are writable. - * - * Returns amount of elements of scatterlist to load for subsequent - * transformations and pointer to writable trailer skb. - */ - -int skb_cow_data(struct sk_buff *skb, int tailbits, struct sk_buff **trailer) -{ - int copyflag; - int elt; - struct sk_buff *skb1, **skb_p; - - /* If skb is cloned or its head is paged, reallocate - * head pulling out all the pages (pages are considered not writable - * at the moment even if they are anonymous). - */ - if ((skb_cloned(skb) || skb_shinfo(skb)->nr_frags) && - __pskb_pull_tail(skb, skb_pagelen(skb)-skb_headlen(skb)) == NULL) - return -ENOMEM; - - /* Easy case. Most of packets will go this way. */ - if (!skb_shinfo(skb)->frag_list) { - /* A little of trouble, not enough of space for trailer. - * This should not happen, when stack is tuned to generate - * good frames. OK, on miss we reallocate and reserve even more - * space, 128 bytes is fair. */ - - if (skb_tailroom(skb) < tailbits && - pskb_expand_head(skb, 0, tailbits-skb_tailroom(skb)+128, GFP_ATOMIC)) - return -ENOMEM; - - /* Voila! */ - *trailer = skb; - return 1; - } - - /* Misery. We are in troubles, going to mincer fragments... 
*/ - - elt = 1; - skb_p = &skb_shinfo(skb)->frag_list; - copyflag = 0; - - while ((skb1 = *skb_p) != NULL) { - int ntail = 0; - - /* The fragment is partially pulled by someone, - * this can happen on input. Copy it and everything - * after it. */ - - if (skb_shared(skb1)) - copyflag = 1; - - /* If the skb is the last, worry about trailer. */ - - if (skb1->next == NULL && tailbits) { - if (skb_shinfo(skb1)->nr_frags || - skb_shinfo(skb1)->frag_list || - skb_tailroom(skb1) < tailbits) - ntail = tailbits + 128; - } - - if (copyflag || - skb_cloned(skb1) || - ntail || - skb_shinfo(skb1)->nr_frags || - skb_shinfo(skb1)->frag_list) { - struct sk_buff *skb2; - - /* Fuck, we are miserable poor guys... */ - if (ntail == 0) - skb2 = skb_copy(skb1, GFP_ATOMIC); - else - skb2 = skb_copy_expand(skb1, - skb_headroom(skb1), - ntail, - GFP_ATOMIC); - if (unlikely(skb2 == NULL)) - return -ENOMEM; - - if (skb1->sk) - skb_set_owner_w(skb2, skb1->sk); - - /* Looking around. Are we still alive? - * OK, link new skb, drop old one */ - - skb2->next = skb1->next; - *skb_p = skb2; - kfree_skb(skb1); - skb1 = skb2; - } - elt++; - *trailer = skb1; - skb_p = &skb1->next; - } - - return elt; -} -EXPORT_SYMBOL_GPL(skb_cow_data); - void *pskb_put(struct sk_buff *skb, struct sk_buff *tail, int len) { if (tail != skb) { diff -puN net/xfrm/xfrm_input.c~git-net net/xfrm/xfrm_input.c --- a/net/xfrm/xfrm_input.c~git-net +++ a/net/xfrm/xfrm_input.c @@ -62,7 +62,7 @@ int xfrm_parse_spi(struct sk_buff *skb, case IPPROTO_COMP: if (!pskb_may_pull(skb, sizeof(struct ip_comp_hdr))) return -EINVAL; - *spi = htonl(ntohs(*(__be16*)(skb->h.raw + 2))); + *spi = htonl(ntohs(*(__be16*)(skb_transport_header(skb) + 2))); *seq = 0; return 0; default: @@ -72,8 +72,8 @@ int xfrm_parse_spi(struct sk_buff *skb, if (!pskb_may_pull(skb, 16)) return -EINVAL; - *spi = *(__be32*)(skb->h.raw + offset); - *seq = *(__be32*)(skb->h.raw + offset_seq); + *spi = *(__be32*)(skb_transport_header(skb) + offset); + *seq = *(__be32*)(skb_transport_header(skb) + offset_seq); return 0; } EXPORT_SYMBOL(xfrm_parse_spi); diff -puN net/xfrm/xfrm_policy.c~git-net net/xfrm/xfrm_policy.c --- a/net/xfrm/xfrm_policy.c~git-net +++ a/net/xfrm/xfrm_policy.c @@ -268,7 +268,7 @@ static inline unsigned long make_jiffies static void xfrm_policy_timer(unsigned long data) { struct xfrm_policy *xp = (struct xfrm_policy*)data; - unsigned long now = (unsigned long)xtime.tv_sec; + unsigned long now = get_seconds(); long next = LONG_MAX; int warn = 0; int dir; @@ -690,7 +690,7 @@ int xfrm_policy_insert(int dir, struct x } policy->index = delpol ? 
delpol->index : xfrm_gen_index(policy->type, dir); hlist_add_head(&policy->byidx, xfrm_policy_byidx+idx_hash(policy->index)); - policy->curlft.add_time = (unsigned long)xtime.tv_sec; + policy->curlft.add_time = get_seconds(); policy->curlft.use_time = 0; if (!mod_timer(&policy->timer, jiffies + HZ)) xfrm_pol_hold(policy); @@ -1049,7 +1049,7 @@ static inline int policy_to_flow_dir(int return FLOW_DIR_OUT; case XFRM_POLICY_FWD: return FLOW_DIR_FWD; - }; + } } static struct xfrm_policy *xfrm_sk_policy_lookup(struct sock *sk, int dir, struct flowi *fl) @@ -1133,7 +1133,7 @@ int xfrm_sk_policy_insert(struct sock *s old_pol = sk->sk_policy[dir]; sk->sk_policy[dir] = pol; if (pol) { - pol->curlft.add_time = (unsigned long)xtime.tv_sec; + pol->curlft.add_time = get_seconds(); pol->index = xfrm_gen_index(pol->type, XFRM_POLICY_MAX+dir); __xfrm_policy_link(pol, XFRM_POLICY_MAX+dir); } @@ -1386,7 +1386,7 @@ restart: return 0; family = dst_orig->ops->family; - policy->curlft.use_time = (unsigned long)xtime.tv_sec; + policy->curlft.use_time = get_seconds(); pols[0] = policy; npols ++; xfrm_nr += pols[0]->xfrm_nr; @@ -1682,7 +1682,7 @@ int __xfrm_policy_check(struct sock *sk, return 1; } - pol->curlft.use_time = (unsigned long)xtime.tv_sec; + pol->curlft.use_time = get_seconds(); pols[0] = pol; npols ++; @@ -1694,7 +1694,7 @@ int __xfrm_policy_check(struct sock *sk, if (pols[1]) { if (IS_ERR(pols[1])) return 0; - pols[1]->curlft.use_time = (unsigned long)xtime.tv_sec; + pols[1]->curlft.use_time = get_seconds(); npols ++; } } diff -puN net/xfrm/xfrm_state.c~git-net net/xfrm/xfrm_state.c --- a/net/xfrm/xfrm_state.c~git-net +++ a/net/xfrm/xfrm_state.c @@ -233,7 +233,7 @@ static inline unsigned long make_jiffies static void xfrm_timer_handler(unsigned long data) { struct xfrm_state *x = (struct xfrm_state*)data; - unsigned long now = (unsigned long)xtime.tv_sec; + unsigned long now = get_seconds(); long next = LONG_MAX; int warn = 0; int err = 0; @@ -326,7 +326,7 @@ struct xfrm_state *xfrm_state_alloc(void init_timer(&x->rtimer); x->rtimer.function = xfrm_replay_timer_handler; x->rtimer.data = (unsigned long)x; - x->curlft.add_time = (unsigned long)xtime.tv_sec; + x->curlft.add_time = get_seconds(); x->lft.soft_byte_limit = XFRM_INF; x->lft.soft_packet_limit = XFRM_INF; x->lft.hard_byte_limit = XFRM_INF; @@ -458,7 +458,7 @@ static struct xfrm_state *__xfrm_state_l x->id.daddr.a6)) continue; break; - }; + } xfrm_state_hold(x); return x; @@ -493,7 +493,7 @@ static struct xfrm_state *__xfrm_state_l x->props.saddr.a6)) continue; break; - }; + } xfrm_state_hold(x); return x; @@ -722,7 +722,7 @@ static struct xfrm_state *__find_acq_cor (struct in6_addr *)saddr)) continue; break; - }; + } xfrm_state_hold(x); return x; @@ -755,7 +755,7 @@ static struct xfrm_state *__find_acq_cor ipv6_addr_copy((struct in6_addr *)x->id.daddr.a6, (struct in6_addr *)daddr); break; - }; + } x->km.state = XFRM_STATE_ACQ; x->id.proto = proto; @@ -1051,7 +1051,7 @@ EXPORT_SYMBOL(xfrm_state_update); int xfrm_state_check_expire(struct xfrm_state *x) { if (!x->curlft.use_time) - x->curlft.use_time = (unsigned long)xtime.tv_sec; + x->curlft.use_time = get_seconds(); if (x->km.state != XFRM_STATE_VALID) return -EINVAL; @@ -1667,37 +1667,17 @@ void xfrm_state_delete_tunnel(struct xfr } EXPORT_SYMBOL(xfrm_state_delete_tunnel); -/* - * This function is NOT optimal. For example, with ESP it will give an - * MTU that's usually two bytes short of being optimal. 
However, it will - * usually give an answer that's a multiple of 4 provided the input is - * also a multiple of 4. - */ int xfrm_state_mtu(struct xfrm_state *x, int mtu) { - int res = mtu; - - res -= x->props.header_len; - - for (;;) { - int m = res; - - if (m < 68) - return 68; - - spin_lock_bh(&x->lock); - if (x->km.state == XFRM_STATE_VALID && - x->type && x->type->get_max_size) - m = x->type->get_max_size(x, m); - else - m += x->props.header_len; - spin_unlock_bh(&x->lock); - - if (m <= mtu) - break; - res -= (m - mtu); - } + int res; + spin_lock_bh(&x->lock); + if (x->km.state == XFRM_STATE_VALID && + x->type && x->type->get_mtu) + res = x->type->get_mtu(x, mtu); + else + res = mtu; + spin_unlock_bh(&x->lock); return res; } diff -puN net/xfrm/xfrm_user.c~git-net net/xfrm/xfrm_user.c --- a/net/xfrm/xfrm_user.c~git-net +++ a/net/xfrm/xfrm_user.c @@ -71,7 +71,7 @@ static int verify_one_alg(struct rtattr default: return -EINVAL; - }; + } algp->alg_name[CRYPTO_MAX_ALG_NAME - 1] = '\0'; return 0; @@ -152,7 +152,7 @@ static int verify_newsa_info(struct xfrm default: goto out; - }; + } err = -EINVAL; switch (p->id.proto) { @@ -192,7 +192,7 @@ static int verify_newsa_info(struct xfrm default: goto out; - }; + } if ((err = verify_one_alg(xfrma, XFRMA_ALG_AUTH))) goto out; @@ -217,7 +217,7 @@ static int verify_newsa_info(struct xfrm default: goto out; - }; + } err = 0; @@ -576,7 +576,7 @@ static int dump_one_state(struct xfrm_st struct sk_buff *skb = sp->out_skb; struct xfrm_usersa_info *p; struct nlmsghdr *nlh; - unsigned char *b = skb->tail; + unsigned char *b = skb_tail_pointer(skb); if (sp->this_idx < sp->start_idx) goto out; @@ -621,14 +621,14 @@ static int dump_one_state(struct xfrm_st if (x->lastused) RTA_PUT(skb, XFRMA_LASTUSED, sizeof(x->lastused), &x->lastused); - nlh->nlmsg_len = skb->tail - b; + nlh->nlmsg_len = skb_tail_pointer(skb) - b; out: sp->this_idx++; return 0; nlmsg_failure: rtattr_failure: - skb_trim(skb, b - skb->data); + nlmsg_trim(skb, b); return -1; } @@ -711,7 +711,7 @@ static int verify_userspi_info(struct xf default: return -EINVAL; - }; + } if (p->min > p->max) return -EINVAL; @@ -789,7 +789,7 @@ static int verify_policy_dir(u8 dir) default: return -EINVAL; - }; + } return 0; } @@ -805,7 +805,7 @@ static int verify_policy_type(u8 type) default: return -EINVAL; - }; + } return 0; } @@ -821,7 +821,7 @@ static int verify_newpolicy_info(struct default: return -EINVAL; - }; + } switch (p->action) { case XFRM_POLICY_ALLOW: @@ -830,7 +830,7 @@ static int verify_newpolicy_info(struct default: return -EINVAL; - }; + } switch (p->sel.family) { case AF_INET: @@ -845,7 +845,7 @@ static int verify_newpolicy_info(struct default: return -EINVAL; - }; + } return verify_policy_dir(p->dir); } @@ -912,7 +912,7 @@ static int validate_tmpl(int nr, struct #endif default: return -EINVAL; - }; + } } return 0; @@ -1157,7 +1157,7 @@ static int dump_one_policy(struct xfrm_p struct sk_buff *in_skb = sp->in_skb; struct sk_buff *skb = sp->out_skb; struct nlmsghdr *nlh; - unsigned char *b = skb->tail; + unsigned char *b = skb_tail_pointer(skb); if (sp->this_idx < sp->start_idx) goto out; @@ -1176,13 +1176,13 @@ static int dump_one_policy(struct xfrm_p if (copy_to_user_policy_type(xp->type, skb) < 0) goto nlmsg_failure; - nlh->nlmsg_len = skb->tail - b; + nlh->nlmsg_len = skb_tail_pointer(skb) - b; out: sp->this_idx++; return 0; nlmsg_failure: - skb_trim(skb, b - skb->data); + nlmsg_trim(skb, b); return -1; } @@ -1330,7 +1330,7 @@ static int build_aevent(struct sk_buff * struct xfrm_aevent_id *id; 
struct nlmsghdr *nlh; struct xfrm_lifetime_cur ltime; - unsigned char *b = skb->tail; + unsigned char *b = skb_tail_pointer(skb); nlh = NLMSG_PUT(skb, c->pid, c->seq, XFRM_MSG_NEWAE, sizeof(*id)); id = NLMSG_DATA(nlh); @@ -1362,12 +1362,12 @@ static int build_aevent(struct sk_buff * RTA_PUT(skb,XFRMA_ETIMER_THRESH,sizeof(u32),&etimer); } - nlh->nlmsg_len = skb->tail - b; + nlh->nlmsg_len = skb_tail_pointer(skb) - b; return skb->len; rtattr_failure: nlmsg_failure: - skb_trim(skb, b - skb->data); + nlmsg_trim(skb, b); return -1; } @@ -1744,7 +1744,7 @@ static int build_migrate(struct sk_buff struct xfrm_migrate *mp; struct xfrm_userpolicy_id *pol_id; struct nlmsghdr *nlh; - unsigned char *b = skb->tail; + unsigned char *b = skb_tail_pointer(skb); int i; nlh = NLMSG_PUT(skb, 0, 0, XFRM_MSG_MIGRATE, sizeof(*pol_id)); @@ -1764,10 +1764,10 @@ static int build_migrate(struct sk_buff goto nlmsg_failure; } - nlh->nlmsg_len = skb->tail - b; + nlh->nlmsg_len = skb_tail_pointer(skb) - b; return skb->len; nlmsg_failure: - skb_trim(skb, b - skb->data); + nlmsg_trim(skb, b); return -1; } @@ -1852,53 +1852,36 @@ static struct xfrm_link { [XFRM_MSG_MIGRATE - XFRM_MSG_BASE] = { .doit = xfrm_do_migrate }, }; -static int xfrm_user_rcv_msg(struct sk_buff *skb, struct nlmsghdr *nlh, int *errp) +static int xfrm_user_rcv_msg(struct sk_buff *skb, struct nlmsghdr *nlh) { struct rtattr *xfrma[XFRMA_MAX]; struct xfrm_link *link; int type, min_len; - if (!(nlh->nlmsg_flags & NLM_F_REQUEST)) - return 0; - type = nlh->nlmsg_type; - - /* A control message: ignore them */ - if (type < XFRM_MSG_BASE) - return 0; - - /* Unknown message: reply with EINVAL */ if (type > XFRM_MSG_MAX) - goto err_einval; + return -EINVAL; type -= XFRM_MSG_BASE; link = &xfrm_dispatch[type]; /* All operations require privileges, even GET */ - if (security_netlink_recv(skb, CAP_NET_ADMIN)) { - *errp = -EPERM; - return -1; - } + if (security_netlink_recv(skb, CAP_NET_ADMIN)) + return -EPERM; if ((type == (XFRM_MSG_GETSA - XFRM_MSG_BASE) || type == (XFRM_MSG_GETPOLICY - XFRM_MSG_BASE)) && (nlh->nlmsg_flags & NLM_F_DUMP)) { if (link->dump == NULL) - goto err_einval; - - if ((*errp = netlink_dump_start(xfrm_nl, skb, nlh, - link->dump, NULL)) != 0) { - return -1; - } + return -EINVAL; - netlink_queue_skip(nlh, skb); - return -1; + return netlink_dump_start(xfrm_nl, skb, nlh, link->dump, NULL); } memset(xfrma, 0, sizeof(xfrma)); if (nlh->nlmsg_len < (min_len = xfrm_msg_min[type])) - goto err_einval; + return -EINVAL; if (nlh->nlmsg_len > min_len) { int attrlen = nlh->nlmsg_len - NLMSG_ALIGN(min_len); @@ -1908,7 +1891,7 @@ static int xfrm_user_rcv_msg(struct sk_b unsigned short flavor = attr->rta_type; if (flavor) { if (flavor > XFRMA_MAX) - goto err_einval; + return -EINVAL; xfrma[flavor - 1] = attr; } attr = RTA_NEXT(attr, attrlen); @@ -1916,14 +1899,9 @@ static int xfrm_user_rcv_msg(struct sk_b } if (link->doit == NULL) - goto err_einval; - *errp = link->doit(skb, nlh, xfrma); - - return *errp; + return -EINVAL; -err_einval: - *errp = -EINVAL; - return -1; + return link->doit(skb, nlh, xfrma); } static void xfrm_netlink_rcv(struct sock *sk, int len) @@ -1942,7 +1920,7 @@ static int build_expire(struct sk_buff * { struct xfrm_user_expire *ue; struct nlmsghdr *nlh; - unsigned char *b = skb->tail; + unsigned char *b = skb_tail_pointer(skb); nlh = NLMSG_PUT(skb, c->pid, 0, XFRM_MSG_EXPIRE, sizeof(*ue)); @@ -1952,11 +1930,11 @@ static int build_expire(struct sk_buff * copy_to_user_state(x, &ue->state); ue->hard = (c->data.hard != 0) ? 
1 : 0; - nlh->nlmsg_len = skb->tail - b; + nlh->nlmsg_len = skb_tail_pointer(skb) - b; return skb->len; nlmsg_failure: - skb_trim(skb, b - skb->data); + nlmsg_trim(skb, b); return -1; } @@ -1999,7 +1977,7 @@ static int xfrm_notify_sa_flush(struct k struct xfrm_usersa_flush *p; struct nlmsghdr *nlh; struct sk_buff *skb; - unsigned char *b; + sk_buff_data_t b; int len = NLMSG_LENGTH(sizeof(struct xfrm_usersa_flush)); skb = alloc_skb(len, GFP_ATOMIC); @@ -2045,7 +2023,7 @@ static int xfrm_notify_sa(struct xfrm_st struct xfrm_usersa_id *id; struct nlmsghdr *nlh; struct sk_buff *skb; - unsigned char *b; + sk_buff_data_t b; int len = xfrm_sa_len(x); int headlen; @@ -2129,7 +2107,7 @@ static int build_acquire(struct sk_buff { struct xfrm_user_acquire *ua; struct nlmsghdr *nlh; - unsigned char *b = skb->tail; + unsigned char *b = skb_tail_pointer(skb); __u32 seq = xfrm_get_acqseq(); nlh = NLMSG_PUT(skb, 0, 0, XFRM_MSG_ACQUIRE, @@ -2153,11 +2131,11 @@ static int build_acquire(struct sk_buff if (copy_to_user_policy_type(xp->type, skb) < 0) goto nlmsg_failure; - nlh->nlmsg_len = skb->tail - b; + nlh->nlmsg_len = skb_tail_pointer(skb) - b; return skb->len; nlmsg_failure: - skb_trim(skb, b - skb->data); + nlmsg_trim(skb, b); return -1; } @@ -2249,7 +2227,7 @@ static int build_polexpire(struct sk_buf struct xfrm_user_polexpire *upe; struct nlmsghdr *nlh; int hard = c->data.hard; - unsigned char *b = skb->tail; + unsigned char *b = skb_tail_pointer(skb); nlh = NLMSG_PUT(skb, c->pid, 0, XFRM_MSG_POLEXPIRE, sizeof(*upe)); upe = NLMSG_DATA(nlh); @@ -2264,11 +2242,11 @@ static int build_polexpire(struct sk_buf goto nlmsg_failure; upe->hard = !!hard; - nlh->nlmsg_len = skb->tail - b; + nlh->nlmsg_len = skb_tail_pointer(skb) - b; return skb->len; nlmsg_failure: - skb_trim(skb, b - skb->data); + nlmsg_trim(skb, b); return -1; } @@ -2300,7 +2278,7 @@ static int xfrm_notify_policy(struct xfr struct xfrm_userpolicy_id *id; struct nlmsghdr *nlh; struct sk_buff *skb; - unsigned char *b; + sk_buff_data_t b; int len = RTA_SPACE(sizeof(struct xfrm_user_tmpl) * xp->xfrm_nr); int headlen; @@ -2357,7 +2335,7 @@ static int xfrm_notify_policy_flush(stru { struct nlmsghdr *nlh; struct sk_buff *skb; - unsigned char *b; + sk_buff_data_t b; int len = 0; #ifdef CONFIG_XFRM_SUB_POLICY len += RTA_SPACE(sizeof(struct xfrm_userpolicy_type)); @@ -2410,7 +2388,7 @@ static int build_report(struct sk_buff * { struct xfrm_user_report *ur; struct nlmsghdr *nlh; - unsigned char *b = skb->tail; + unsigned char *b = skb_tail_pointer(skb); nlh = NLMSG_PUT(skb, 0, 0, XFRM_MSG_REPORT, sizeof(*ur)); ur = NLMSG_DATA(nlh); @@ -2422,12 +2400,12 @@ static int build_report(struct sk_buff * if (addr) RTA_PUT(skb, XFRMA_COADDR, sizeof(*addr), addr); - nlh->nlmsg_len = skb->tail - b; + nlh->nlmsg_len = skb_tail_pointer(skb) - b; return skb->len; nlmsg_failure: rtattr_failure: - skb_trim(skb, b - skb->data); + nlmsg_trim(skb, b); return -1; } @@ -2466,7 +2444,7 @@ static int __init xfrm_user_init(void) printk(KERN_INFO "Initializing XFRM netlink socket\n"); nlsk = netlink_kernel_create(NETLINK_XFRM, XFRMNLGRP_MAX, - xfrm_netlink_rcv, THIS_MODULE); + xfrm_netlink_rcv, NULL, THIS_MODULE); if (nlsk == NULL) return -ENOMEM; rcu_assign_pointer(xfrm_nl, nlsk); diff -puN security/selinux/hooks.c~git-net security/selinux/hooks.c --- a/security/selinux/hooks.c~git-net +++ a/security/selinux/hooks.c @@ -2944,7 +2944,7 @@ static int selinux_parse_skb_ipv4(struct int offset, ihlen, ret = -EINVAL; struct iphdr _iph, *ih; - offset = skb->nh.raw - skb->data; + offset = 
skb_network_offset(skb); ih = skb_header_pointer(skb, offset, sizeof(_iph), &_iph); if (ih == NULL) goto out; @@ -3026,7 +3026,7 @@ static int selinux_parse_skb_ipv6(struct int ret = -EINVAL, offset; struct ipv6hdr _ipv6h, *ip6; - offset = skb->nh.raw - skb->data; + offset = skb_network_offset(skb); ip6 = skb_header_pointer(skb, offset, sizeof(_ipv6h), &_ipv6h); if (ip6 == NULL) goto out; @@ -3786,7 +3786,7 @@ static int selinux_nlmsg_perm(struct soc err = -EINVAL; goto out; } - nlh = (struct nlmsghdr *)skb->data; + nlh = nlmsg_hdr(skb); err = selinux_nlmsg_lookup(isec->sclass, nlh->nlmsg_type, &perm); if (err) { diff -puN security/selinux/netlink.c~git-net security/selinux/netlink.c --- a/security/selinux/netlink.c~git-net +++ a/security/selinux/netlink.c @@ -66,7 +66,7 @@ static void selnl_add_payload(struct nlm static void selnl_notify(int msgtype, void *data) { int len; - unsigned char *tmp; + sk_buff_data_t tmp; struct sk_buff *skb; struct nlmsghdr *nlh; @@ -104,7 +104,7 @@ void selnl_notify_policyload(u32 seqno) static int __init selnl_init(void) { - selnl = netlink_kernel_create(NETLINK_SELINUX, SELNLGRP_MAX, NULL, + selnl = netlink_kernel_create(NETLINK_SELINUX, SELNLGRP_MAX, NULL, NULL, THIS_MODULE); if (selnl == NULL) panic("SELinux: Cannot create netlink socket."); _
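
For readers following the sk_buff conversions above, the following is a minimal, hypothetical sketch (not part of the git-net patch) of the message-builder pattern that the xfrm_user.c hunks standardize on: remember the tail mark with skb_tail_pointer(), compute nlmsg_len from that mark, and unwind with nlmsg_trim() instead of skb_trim(skb, b - skb->data). The function name build_example_msg() and its u32 payload are invented for illustration; the helpers themselves (skb_tail_pointer, NLMSG_PUT, NLMSG_DATA, nlmsg_trim) are the in-tree APIs the patch switches to.

#include <linux/skbuff.h>
#include <linux/netlink.h>
#include <linux/string.h>
#include <net/netlink.h>

/* Hypothetical builder, sketched after build_expire()/build_report() above. */
static int build_example_msg(struct sk_buff *skb, u32 pid, u32 seq, int type)
{
	struct nlmsghdr *nlh;
	unsigned char *b = skb_tail_pointer(skb);	/* save the tail mark */
	u32 payload = 0;				/* stand-in payload */

	/* NLMSG_PUT() jumps to nlmsg_failure if the skb has no room left */
	nlh = NLMSG_PUT(skb, pid, seq, type, sizeof(payload));
	memcpy(NLMSG_DATA(nlh), &payload, sizeof(payload));

	/* measure the final length from the saved mark, not from skb->data */
	nlh->nlmsg_len = skb_tail_pointer(skb) - b;
	return skb->len;

nlmsg_failure:
	nlmsg_trim(skb, b);		/* roll the skb back to the saved mark */
	return -1;
}

Measuring from the saved mark rather than from skb->data is what allows skb->tail to become an offset (sk_buff_data_t) on 64-bit builds without revisiting every builder function again.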