axs2k/anubis
commit_msg (string, 1–24.2k chars) | commit_hash (string, 2–84 chars, nullable) | project (string, 2–40 chars) | source (string, 4 classes) | labels (int64, 0–1) | repo_url (string, 26–70 chars, nullable) | commit_url (string, 74–118 chars, nullable) | commit_date (string, 25 chars, nullable) |
---|---|---|---|---|---|---|---|
[glyphstring] Handle overflow with very long glyphstrings | 4de30e5500eaeb49f4bf0b7a07f718e149a2ed5e | pango | bigvul | 1 | null | null | null |
Don't shell-interpret \@SELECTED_URI (fixes FS#240) | 9cc39cb5c9396be013b5dc2ba7e4b3eaa647e975 | uzbl | bigvul | 1 | null | null | null |
Discard job body bytes if the job is too big.
Previously, a malicious user could craft a job payload and inject
beanstalk commands without the client application knowing. (An
extra-careful client library could check the size of the job body before
sending the put command, but most libraries do not do this, nor should
they have to.)
Reported by Graham Barr. | 2e8e8c6387ecdf5923dfc4d7718d18eba1b0873d | beanstalkd | bigvul | 1 | null | null | null |
Check if an SSL certificate matches the hostname of the server we are connecting to
git-svn-id: http://svn.irssi.org/repos/irssi/trunk@5104 dbcabf3a-b0e7-0310-adc4-f8d773084564 | 85bbc05b21678e80423815d2ef1dfe26208491ab | irssi-proxy | bigvul | 1 | null | null | null |
Use strncmp when checking for large ascii multigets. | d9cd01ede97f4145af9781d448c62a3318952719 | memcached | bigvul | 1 | null | null | null |
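As a hedged illustration of why a bounded comparison matters for a check like this: compare only the command prefix with strncmp so an over-long, possibly unterminated request buffer is never read past its end. The function and prefix below are placeholders, not memcached's code.

```c
#include <stdbool.h>
#include <string.h>

/* Check whether the request buffer starts with the "get " command,
 * reading at most len bytes of the (possibly unterminated) buffer. */
static bool is_get_command(const char *buf, size_t len)
{
    static const char prefix[] = "get ";
    size_t plen = sizeof(prefix) - 1;

    if (len < plen)
        return false;
    return strncmp(buf, prefix, plen) == 0;   /* bounded, unlike strcmp */
}
```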
disable Uzbl javascript object because of security problem. | 1958b52d41cba96956dc1995660de49525ed1047 | uzbl | bigvul | 1 | null | null | null |
Do not attempt to decode APE file with no frames
This fixes invalid reads/writes with this sample:
http://packetstorm.linuxsecurity.com/1103-exploits/vlc105-dos.txt | 8312e3fc9041027a33c8bc667bb99740fdf41dd5 | ffmpeg | bigvul | 1 | null | null | null |
Flush the command buffer after switching to TLS.
Fixes a flaw similar to CVE-2011-0411. | 65c4d4ad331e94661de763e9b5304d28698999c4 | pure-ftpd | bigvul | 1 | null | null | null |
Fix buffer size checking
Yes, this means we've re-introduced CVE-2005-3534. Sigh. | 3ef52043861ab16352d49af89e048ba6339d6df8 | nbd | bigvul | 1 | null | null | null |
tools: hv: Netlink source address validation allows DoS
The source code without this patch caused hypervkvpd to exit when it processed
a spoofed Netlink packet which has been sent from an untrusted local user.
Now Netlink messages with a non-zero nl_pid source address are ignored
and a warning is printed into the syslog.
Signed-off-by: Tomas Hozza <[email protected]>
Acked-by: K. Y. Srinivasan <[email protected]>
Signed-off-by: Greg Kroah-Hartman <[email protected]> | 95a69adab9acfc3981c504737a2b6578e4d846ef | linux | bigvul | 1 | null | null | null |
mm/hotplug: correctly add new zone to all other nodes' zone lists
When online_pages() is called to add new memory to an empty zone, it
rebuilds all zone lists by calling build_all_zonelists(). But there's a
bug which prevents the new zone from being added to other nodes' zone lists.
online_pages() {
build_all_zonelists()
.....
node_set_state(zone_to_nid(zone), N_HIGH_MEMORY)
}
Here the node of the zone is put into N_HIGH_MEMORY state after calling
build_all_zonelists(), but build_all_zonelists() only adds zones from
nodes in N_HIGH_MEMORY state to the fallback zone lists.
build_all_zonelists()
->__build_all_zonelists()
->build_zonelists()
->find_next_best_node()
->for_each_node_state(n, N_HIGH_MEMORY)
So memory in the new zone will never be used by other nodes, and it may
cause strange behavior when the system is under memory pressure. So put the
node into N_HIGH_MEMORY state before calling build_all_zonelists().
Signed-off-by: Jianguo Wu <[email protected]>
Signed-off-by: Jiang Liu <[email protected]>
Cc: Mel Gorman <[email protected]>
Cc: Michal Hocko <[email protected]>
Cc: Minchan Kim <[email protected]>
Cc: Rusty Russell <[email protected]>
Cc: Yinghai Lu <[email protected]>
Cc: Tony Luck <[email protected]>
Cc: KAMEZAWA Hiroyuki <[email protected]>
Cc: KOSAKI Motohiro <[email protected]>
Cc: David Rientjes <[email protected]>
Cc: Keping Chen <[email protected]>
Signed-off-by: Andrew Morton <[email protected]>
Signed-off-by: Linus Torvalds <[email protected]> | 08dff7b7d629807dbb1f398c68dd9cd58dd657a1 | linux | bigvul | 1 | null | null | null |
net: fix divide by zero in tcp algorithm illinois
Reading TCP stats when using TCP Illinois congestion control algorithm
can cause a divide by zero kernel oops.
The division by zero occurs in tcp_illinois_info() at:
do_div(t, ca->cnt_rtt);
where ca->cnt_rtt can become zero (when rtt_reset is called)
Steps to Reproduce:
1. Register tcp_illinois:
# sysctl -w net.ipv4.tcp_congestion_control=illinois
2. Monitor internal TCP information via command "ss -i"
# watch -d ss -i
3. Establish new TCP conn to machine
Either it fails at the initial conn, or else it needs to wait
for a loss or a reset.
This is only related to reading stats. The function avg_delay() also
performs the same divide, but is guarded with a (ca->cnt_rtt > 0) at its
calling point in update_params(). Thus, simply fix tcp_illinois_info().
Function tcp_illinois_info() / get_info() is called without
socket lock. Thus, eliminate any race condition on ca->cnt_rtt
by using a local stack variable. Simply reuse info.tcpv_rttcnt,
as it's already set to ca->cnt_rtt.
Function avg_delay() is not affected by this race condition, as
it's called with the socket lock held.
Cc: Petr Matousek <[email protected]>
Signed-off-by: Jesper Dangaard Brouer <[email protected]>
Acked-by: Eric Dumazet <[email protected]>
Acked-by: Stephen Hemminger <[email protected]>
Signed-off-by: David S. Miller <[email protected]> | 8f363b77ee4fbf7c3bbcf5ec2c5ca482d396d664 | linux | bigvul | 1 | null | null | null |
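A minimal sketch of the pattern this commit describes, not the kernel code itself: snapshot the shared counter into a local variable and guard the division, so a concurrent reset cannot cause a divide by zero. The struct and field names are simplified placeholders.

```c
#include <stdint.h>

struct illinois_stats {
    uint64_t sum_rtt;   /* accumulated RTT samples */
    uint32_t cnt_rtt;   /* sample count; may be reset concurrently */
};

/* Report the average RTT without risking a divide by zero. */
static uint64_t report_avg_rtt(const struct illinois_stats *ca)
{
    /* Take one local snapshot; the shared counter may change under us. */
    uint32_t cnt = ca->cnt_rtt;

    if (cnt == 0)           /* guard: a reset may have zeroed it */
        return 0;
    return ca->sum_rtt / cnt;
}
```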
ext4: race-condition protection for ext4_convert_unwritten_extents_endio
We assumed that at the time we call ext4_convert_unwritten_extents_endio()
the extent in question is fully inside [map.m_lblk, map->m_len] because
it was already split during submission. But this may not be true due to
a race between writeback and fallocate.
If the extent in question is larger than requested, we will split it again.
Special precautions should be taken if zeroout is required, because
[map.m_lblk, map->m_len] already contains valid data.
Signed-off-by: Dmitry Monakhov <[email protected]>
Signed-off-by: "Theodore Ts'o" <[email protected]>
Cc: [email protected] | dee1f973ca341c266229faa5a1a5bb268bed3531 | linux | bigvul | 1 | null | null | null |
Fix order of arguments to compat_put_time[spec|val]
Commit 644595f89620 ("compat: Handle COMPAT_USE_64BIT_TIME in
net/socket.c") introduced a bug where the helper functions to take
either a 64-bit or compat time[spec|val] got the arguments in the wrong
order, passing the kernel stack pointer off as a user pointer (and vice
versa).
Because of the user address range check, that in turn causes an
EFAULT due to the user pointer range check failing for the kernel
address, incorrectly resulting in a failed system call for 32-bit
processes with a 64-bit kernel.
On odder architectures like HP-PA (with separate user/kernel address
spaces), it can be used to read kernel memory.
Signed-off-by: Mikulas Patocka <[email protected]>
Cc: [email protected]
Signed-off-by: Linus Torvalds <[email protected]> | ed6fe9d614fc1bca95eb8c0ccd0e92db00ef9d5d | linux | bigvul | 1 | null | null | null |
ipv6: discard overlapping fragment
RFC5722 prohibits reassembling fragments when some data overlaps.
Bug spotted by Zhang Zuotao <[email protected]>.
Signed-off-by: Nicolas Dichtel <[email protected]>
Signed-off-by: David S. Miller <[email protected]> | 70789d7052239992824628db8133de08dc78e593 | linux | bigvul | 1 | null | null | null |
inet: add RCU protection to inet->opt
We lack proper synchronization to manipulate inet->opt ip_options
Problem is ip_make_skb() calls ip_setup_cork() and
ip_setup_cork() possibly makes a copy of ipc->opt (struct ip_options),
without any protection against another thread manipulating inet->opt.
Another thread can change the inet->opt pointer and free the old one under us.
Use RCU to protect inet->opt (changed to inet->inet_opt).
Instead of handling atomic refcounts, just copy ip_options when
necessary, to avoid cache line dirtying.
We can't insert an rcu_head in struct ip_options since it's included in
skb->cb[], so this patch is large because I had to introduce a new
ip_options_rcu structure.
Signed-off-by: Eric Dumazet <[email protected]>
Cc: Herbert Xu <[email protected]>
Signed-off-by: David S. Miller <[email protected]> | f6d8bd051c391c1c0458a30b2a7abcd939329259 | linux | bigvul | 1 | null | null | null |
Fixed possibility of Unsolicited Dialback Attacks | aabcffae560d5fd00cd1d2ffce5d760353cf0a4d | jabberd2 | bigvul | 1 | null | null | null |
af_netlink: force credentials passing [CVE-2012-3520]
Pablo Neira Ayuso discovered that avahi and
potentially NetworkManager accept spoofed Netlink messages because of a
kernel bug. The kernel passes all-zero SCM_CREDENTIALS ancillary data
to the receiver if the sender did not provide such data, instead of not
including any such data at all or including the correct data from the
peer (as it is the case with AF_UNIX).
This bug was introduced in commit 16e572626961
(af_unix: dont send SCM_CREDENTIALS by default)
This patch forces passing credentials for netlink, as
before the regression.
Another fix would be to not add SCM_CREDENTIALS in
netlink messages if not provided by the sender, but it
might break some programs.
With help from Florian Weimer & Petr Matousek
This issue is designated as CVE-2012-3520
Signed-off-by: Eric Dumazet <[email protected]>
Cc: Petr Matousek <[email protected]>
Cc: Florian Weimer <[email protected]>
Cc: Pablo Neira Ayuso <[email protected]>
Signed-off-by: David S. Miller <[email protected]> | e0e3cea46d31d23dc40df0a49a7a2c04fe8edfea | linux | bigvul | 1 | null | null | null |
mm: Hold a file reference in madvise_remove
Otherwise the code races with munmap (causing a use-after-free
of the vma) or with close (causing a use-after-free of the struct
file).
The bug was introduced by commit 90ed52ebe481 ("[PATCH] holepunch: fix
mmap_sem i_mutex deadlock")
Cc: Hugh Dickins <[email protected]>
Cc: Miklos Szeredi <[email protected]>
Cc: Badari Pulavarty <[email protected]>
Cc: Nick Piggin <[email protected]>
Cc: [email protected]
Signed-off-by: Andy Lutomirski <[email protected]>
Signed-off-by: Linus Torvalds <[email protected]> | 9ab4233dd08036fe34a89c7dc6f47a8bf2eb29eb | linux | bigvul | 1 | null | null | null |
[PATCH] xacct_add_tsk: fix pure theoretical ->mm use-after-free
Paranoid fix. The task can free its ->mm after the 'if (p->mm)' check.
Signed-off-by: Oleg Nesterov <[email protected]>
Cc: Shailabh Nagar <[email protected]>
Cc: Balbir Singh <[email protected]>
Cc: Jay Lan <[email protected]>
Signed-off-by: Andrew Morton <[email protected]>
Signed-off-by: Linus Torvalds <[email protected]> | f0ec1aaf54caddd21c259aea8b2ecfbde4ee4fb9 | linux | bigvul | 1 | null | null | null |
rds: set correct msg_namelen
Jay Fenlason ([email protected]) found a bug,
that recvfrom() on an RDS socket can return the contents of random kernel
memory to userspace if it was called with an address length larger than
sizeof(struct sockaddr_in).
rds_recvmsg() also fails to set the addr_len parameter properly before
returning, but that's just a bug.
There are also a number of cases where recvfrom() can return an entirely bogus
address. Anything in rds_recvmsg() that returns a non-negative value but does
not go through the "sin = (struct sockaddr_in *)msg->msg_name;" code path
at the end of the while(1) loop will return up to 128 bytes of kernel memory
to userspace.
And I wrote two test programs to reproduce this bug; you will see that in
rds_server, fromAddr will be overwritten and the following sock_fd will be
destroyed.
Yes, it is the programmer's fault to set msg_namelen incorrectly, but it is
better to make the kernel copy the real length of address to user space in
such case.
How to run the test programs ?
I test them on 32bit x86 system, 3.5.0-rc7.
1 compile
gcc -o rds_client rds_client.c
gcc -o rds_server rds_server.c
2 run ./rds_server on one console
3 run ./rds_client on another console
4 you will see something like:
server is waiting to receive data...
old socket fd=3
server received data from client:data from client
msg.msg_namelen=32
new socket fd=-1067277685
sendmsg()
: Bad file descriptor
/***************** rds_client.c ********************/
int main(void)
{
int sock_fd;
struct sockaddr_in serverAddr;
struct sockaddr_in toAddr;
char recvBuffer[128] = "data from client";
struct msghdr msg;
struct iovec iov;
sock_fd = socket(AF_RDS, SOCK_SEQPACKET, 0);
if (sock_fd < 0) {
perror("create socket error\n");
exit(1);
}
memset(&serverAddr, 0, sizeof(serverAddr));
serverAddr.sin_family = AF_INET;
serverAddr.sin_addr.s_addr = inet_addr("127.0.0.1");
serverAddr.sin_port = htons(4001);
if (bind(sock_fd, (struct sockaddr*)&serverAddr, sizeof(serverAddr)) < 0) {
perror("bind() error\n");
close(sock_fd);
exit(1);
}
memset(&toAddr, 0, sizeof(toAddr));
toAddr.sin_family = AF_INET;
toAddr.sin_addr.s_addr = inet_addr("127.0.0.1");
toAddr.sin_port = htons(4000);
msg.msg_name = &toAddr;
msg.msg_namelen = sizeof(toAddr);
msg.msg_iov = &iov;
msg.msg_iovlen = 1;
msg.msg_iov->iov_base = recvBuffer;
msg.msg_iov->iov_len = strlen(recvBuffer) + 1;
msg.msg_control = 0;
msg.msg_controllen = 0;
msg.msg_flags = 0;
if (sendmsg(sock_fd, &msg, 0) == -1) {
perror("sendto() error\n");
close(sock_fd);
exit(1);
}
printf("client send data:%s\n", recvBuffer);
memset(recvBuffer, '\0', 128);
msg.msg_name = &toAddr;
msg.msg_namelen = sizeof(toAddr);
msg.msg_iov = &iov;
msg.msg_iovlen = 1;
msg.msg_iov->iov_base = recvBuffer;
msg.msg_iov->iov_len = 128;
msg.msg_control = 0;
msg.msg_controllen = 0;
msg.msg_flags = 0;
if (recvmsg(sock_fd, &msg, 0) == -1) {
perror("recvmsg() error\n");
close(sock_fd);
exit(1);
}
printf("receive data from server:%s\n", recvBuffer);
close(sock_fd);
return 0;
}
/***************** rds_server.c ********************/
int main(void)
{
struct sockaddr_in fromAddr;
int sock_fd;
struct sockaddr_in serverAddr;
unsigned int addrLen;
char recvBuffer[128];
struct msghdr msg;
struct iovec iov;
sock_fd = socket(AF_RDS, SOCK_SEQPACKET, 0);
if(sock_fd < 0) {
perror("create socket error\n");
exit(0);
}
memset(&serverAddr, 0, sizeof(serverAddr));
serverAddr.sin_family = AF_INET;
serverAddr.sin_addr.s_addr = inet_addr("127.0.0.1");
serverAddr.sin_port = htons(4000);
if (bind(sock_fd, (struct sockaddr*)&serverAddr, sizeof(serverAddr)) < 0) {
perror("bind error\n");
close(sock_fd);
exit(1);
}
printf("server is waiting to receive data...\n");
msg.msg_name = &fromAddr;
/*
* I add 16 to sizeof(fromAddr), ie 32,
* and pay attention to the definition of fromAddr,
* recvmsg() will overwrite sock_fd,
* since kernel will copy 32 bytes to userspace.
*
* If you just use sizeof(fromAddr), it works fine.
* */
msg.msg_namelen = sizeof(fromAddr) + 16;
/* msg.msg_namelen = sizeof(fromAddr); */
msg.msg_iov = &iov;
msg.msg_iovlen = 1;
msg.msg_iov->iov_base = recvBuffer;
msg.msg_iov->iov_len = 128;
msg.msg_control = 0;
msg.msg_controllen = 0;
msg.msg_flags = 0;
while (1) {
printf("old socket fd=%d\n", sock_fd);
if (recvmsg(sock_fd, &msg, 0) == -1) {
perror("recvmsg() error\n");
close(sock_fd);
exit(1);
}
printf("server received data from client:%s\n", recvBuffer);
printf("msg.msg_namelen=%d\n", msg.msg_namelen);
printf("new socket fd=%d\n", sock_fd);
strcat(recvBuffer, "--data from server");
if (sendmsg(sock_fd, &msg, 0) == -1) {
perror("sendmsg()\n");
close(sock_fd);
exit(1);
}
}
close(sock_fd);
return 0;
}
Signed-off-by: Weiping Pan <[email protected]>
Signed-off-by: David S. Miller <[email protected]> | 06b6a1cf6e776426766298d055bb3991957d90a7 | linux | bigvul | 1 | null | null | null |
sfc: Fix maximum number of TSO segments and minimum TX queue size
[ Upstream commit 7e6d06f0de3f74ca929441add094518ae332257c ]
Currently an skb requiring TSO may not fit within a minimum-size TX
queue. The TX queue selected for the skb may stall and trigger the TX
watchdog repeatedly (since the problem skb will be retried after the
TX reset). This issue is designated as CVE-2012-3412.
Set the maximum number of TSO segments for our devices to 100. This
should make no difference to behaviour unless the actual MSS is less
than about 700. Increase the minimum TX queue size accordingly to
allow for 2 worst-case skbs, so that there will definitely be space
to add an skb after we wake a queue.
To avoid invalidating existing configurations, change
efx_ethtool_set_ringparam() to fix up values that are too small rather
than returning -EINVAL.
Signed-off-by: Ben Hutchings <[email protected]>
Signed-off-by: David S. Miller <[email protected]>
Signed-off-by: Ben Hutchings <[email protected]> | 68cb695ccecf949d48949e72f8ce591fdaaa325c | linux | bigvul | 1 | null | null | null |
udf: Avoid run away loop when partition table length is corrupted
Check the provided length of the partition table so that a (possibly
maliciously) corrupted partition table cannot cause access to data beyond the
current buffer.
Signed-off-by: Jan Kara <[email protected]> | adee11b2085bee90bd8f4f52123ffb07882d6256 | linux | bigvul | 1 | null | null | null |
epoll: clear the tfile_check_list on -ELOOP
An epoll_ctl(,EPOLL_CTL_ADD,,) operation can return '-ELOOP' to prevent
circular epoll dependencies from being created. However, in that case we
do not properly clear the 'tfile_check_list'. Thus, add a call to
clear_tfile_check_list() for the -ELOOP case.
Signed-off-by: Jason Baron <[email protected]>
Reported-by: Yurij M. Plotnikov <[email protected]>
Cc: Nelson Elhage <[email protected]>
Cc: Davide Libenzi <[email protected]>
Tested-by: Alexandra N. Kossovsky <[email protected]>
Signed-off-by: Andrew Morton <[email protected]>
Signed-off-by: Linus Torvalds <[email protected]> | 13d518074a952d33d47c428419693f63389547e9 | linux | bigvul | 1 | null | null | null |
cred: copy_process() should clear child->replacement_session_keyring
keyctl_session_to_parent(task) sets ->replacement_session_keyring,
it should be processed and cleared by key_replace_session_keyring().
However, this task can fork before it notices TIF_NOTIFY_RESUME and
the new child gets the bogus ->replacement_session_keyring copied by
dup_task_struct(). This is obviously wrong and, if nothing else, this
leads to put_cred(already_freed_cred).
change copy_creds() to clear this member. If copy_process() fails
before this point the wrong ->replacement_session_keyring doesn't
matter, exit_creds() won't be called.
Cc: <[email protected]>
Signed-off-by: Oleg Nesterov <[email protected]>
Acked-by: David Howells <[email protected]>
Signed-off-by: Linus Torvalds <[email protected]> | 79549c6dfda0603dba9a70a53467ce62d9335c33 | linux | bigvul | 1 | null | null | null |
netfilter: nf_conntrack_reasm: properly handle packets fragmented into a single fragment
When an ICMPV6_PKT_TOOBIG message is received with a MTU below 1280,
all further packets include a fragment header.
Unlike regular defragmentation, conntrack also needs to "reassemble"
those fragments in order to obtain a packet without the fragment
header for connection tracking. Currently nf_conntrack_reasm checks
whether a fragment has either IP6_MF set or an offset != 0, which
makes it ignore those fragments.
Remove the invalid check and make reassembly handle fragment queues
containing only a single fragment.
Reported-and-tested-by: Ulrich Weber <[email protected]>
Signed-off-by: Patrick McHardy <[email protected]> | 9e2dcf72023d1447f09c47d77c99b0c49659e5ce | linux | bigvul | 1 | null | null | null |
Avoid overflowing allocation size in CallMalloc()
The wraparound could happen if USE_MAGIC_HEADERS is enabled. | 1a759756639ab7543b650a10c2d77a0ffc7a2000 | nedmalloc | bigvul | 1 | null | null | null |
Fix calloc() overflow
* malloc.c (calloc): Check multiplication overflow in calloc(),
assuming REDIRECT_MALLOC. | e10c1eb9908c2774c16b3148b30d2f3823d66a9a | bdwgc | bigvul | 1 | null | null | null |
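The general pattern of such a fix, as a standalone sketch rather than the bdwgc source: check the multiplication for overflow before allocating, so nmemb * size cannot wrap around to a small value.

```c
#include <stdint.h>
#include <stdlib.h>
#include <string.h>

/* Zeroed allocation built on malloc(); the explicit check is the point:
 * without it, nmemb * size can wrap and return a too-small buffer. */
void *checked_calloc(size_t nmemb, size_t size)
{
    if (nmemb != 0 && size > SIZE_MAX / nmemb)
        return NULL;                     /* product would overflow size_t */
    size_t total = nmemb * size;         /* safe after the check */
    void *p = malloc(total);
    if (p)
        memset(p, 0, total);
    return p;
}
```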
Tools: hv: verify origin of netlink connector message
The SuSE security team suggested to use recvfrom instead of recv to be
certain that the connector message is originated from kernel.
CVE-2012-2669
Signed-off-by: Olaf Hering <[email protected]>
Signed-off-by: Marcus Meissner <[email protected]>
Signed-off-by: Sebastian Krahmer <[email protected]>
Signed-off-by: K. Y. Srinivasan <[email protected]>
Cc: stable <[email protected]>
Signed-off-by: Greg Kroah-Hartman <[email protected]> | bcc2c9c3fff859e0eb019fe6fec26f9b8eba795c | linux | bigvul | 1 | null | null | null |
hugetlb: fix resv_map leak in error path
When called for anonymous (non-shared) mappings, hugetlb_reserve_pages()
does a resv_map_alloc(). It depends on code in hugetlbfs's
vm_ops->close() to release that allocation.
However, in the mmap() failure path, we do a plain unmap_region() without
the remove_vma() which actually calls vm_ops->close().
This is a decent fix. This leak could get reintroduced if new code (say,
after hugetlb_reserve_pages() in hugetlbfs_file_mmap()) decides to return
an error. But, I think it would have to unroll the reservation anyway.
Christoph's test case:
http://marc.info/?l=linux-mm&m=133728900729735
This patch applies to 3.4 and later. A version for earlier kernels is at
https://lkml.org/lkml/2012/5/22/418.
Signed-off-by: Dave Hansen <[email protected]>
Acked-by: Mel Gorman <[email protected]>
Acked-by: KOSAKI Motohiro <[email protected]>
Reported-by: Christoph Lameter <[email protected]>
Tested-by: Christoph Lameter <[email protected]>
Cc: Andrea Arcangeli <[email protected]>
Cc: <[email protected]> [2.6.32+]
Signed-off-by: Andrew Morton <[email protected]>
Signed-off-by: Linus Torvalds <[email protected]> | c50ac050811d6485616a193eb0f37bfbd191cc89 | linux | bigvul | 1 | null | null | null |
Cap escape sequence parameters to prevent long loops.
Fixes #271 github issue. | 9791768705528e911bfca6c4d8aa88139035060e | mosh | bigvul | 1 | null | null | null |
drm/i915: fix integer overflow in i915_gem_do_execbuffer()
On 32-bit systems, a large args->num_cliprects from userspace via ioctl
may overflow the allocation size, leading to out-of-bounds access.
This vulnerability was introduced in commit 432e58ed ("drm/i915: Avoid
allocation for execbuffer object list").
Signed-off-by: Xi Wang <[email protected]>
Reviewed-by: Chris Wilson <[email protected]>
Cc: [email protected]
Signed-off-by: Daniel Vetter <[email protected]> | 44afb3a04391a74309d16180d1e4f8386fdfa745 | linux | bigvul | 1 | null | null | null |
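A hedged sketch of the class of check such a fix needs: reject a user-supplied count whose product with the element size would overflow before it reaches the allocator. The struct layout and names below are placeholders, not the driver's.

```c
#include <stdint.h>
#include <stdlib.h>

struct cliprect { int32_t x1, y1, x2, y2; };   /* illustrative layout */

/* Copy-in helper: refuse counts that would overflow count * sizeof(elem),
 * which on 32-bit systems can wrap to a tiny allocation size. */
static struct cliprect *alloc_cliprects(uint32_t count)
{
    if (count > SIZE_MAX / sizeof(struct cliprect))
        return NULL;                    /* would wrap; reject the request */
    return malloc((size_t)count * sizeof(struct cliprect));
}
```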
drm/i915: fix integer overflow in i915_gem_execbuffer2()
On 32-bit systems, a large args->buffer_count from userspace via ioctl
may overflow the allocation size, leading to out-of-bounds access.
This vulnerability was introduced in commit 8408c282 ("drm/i915:
First try a normal large kmalloc for the temporary exec buffers").
Signed-off-by: Xi Wang <[email protected]>
Reviewed-by: Chris Wilson <[email protected]>
Cc: [email protected]
Signed-off-by: Daniel Vetter <[email protected]> | ed8cd3b2cd61004cab85380c52b1817aca1ca49b | linux | bigvul | 1 | null | null | null |
Fix length of buffer copied in __nfs4_get_acl_uncached
_copy_from_pages(), used to copy data from the temporary buffer to the
user-passed buffer, is passed the wrong size parameter when copying
data. res.acl_len contains both the bitmap and acl lengths, while
acl_len contains the acl length after adjusting for the bitmap size.
Signed-off-by: Sachin Prabhu <[email protected]>
Signed-off-by: Trond Myklebust <[email protected]> | 20e0fa98b751facf9a1101edaefbc19c82616a68 | linux | bigvul | 1 | null | null | null |
mm: pmd_read_atomic: fix 32bit PAE pmd walk vs pmd_populate SMP race condition
When holding the mmap_sem for reading, pmd_offset_map_lock should only
run on a pmd_t that has been read atomically from the pmdp pointer,
otherwise we may read only half of it leading to this crash.
PID: 11679 TASK: f06e8000 CPU: 3 COMMAND: "do_race_2_panic"
#0 [f06a9dd8] crash_kexec at c049b5ec
#1 [f06a9e2c] oops_end at c083d1c2
#2 [f06a9e40] no_context at c0433ded
#3 [f06a9e64] bad_area_nosemaphore at c043401a
#4 [f06a9e6c] __do_page_fault at c0434493
#5 [f06a9eec] do_page_fault at c083eb45
#6 [f06a9f04] error_code (via page_fault) at c083c5d5
EAX: 01fb470c EBX: fff35000 ECX: 00000003 EDX: 00000100 EBP: 00000000
DS: 007b ESI: 9e201000 ES: 007b EDI: 01fb4700 GS: 00e0
CS: 0060 EIP: c083bc14 ERR: ffffffff EFLAGS: 00010246
#7 [f06a9f38] _spin_lock at c083bc14
#8 [f06a9f44] sys_mincore at c0507b7d
#9 [f06a9fb0] system_call at c083becd
start len
EAX: ffffffda EBX: 9e200000 ECX: 00001000 EDX: 6228537f
DS: 007b ESI: 00000000 ES: 007b EDI: 003d0f00
SS: 007b ESP: 62285354 EBP: 62285388 GS: 0033
CS: 0073 EIP: 00291416 ERR: 000000da EFLAGS: 00000286
This should be a longstanding bug affecting x86 32bit PAE without THP.
Only archs with 64bit large pmd_t and 32bit unsigned long should be
affected.
With THP enabled the barrier() in pmd_none_or_trans_huge_or_clear_bad()
would partly hide the bug when the pmd transitions from none to stable,
by forcing a re-read of the *pmd in pmd_offset_map_lock, but when THP is
enabled a new set of problems arises because the pmd could then transition
freely among the none, pmd_trans_huge or pmd_trans_stable states.
So making the barrier in pmd_none_or_trans_huge_or_clear_bad()
unconditional isn't a good idea, and it would be a flaky solution.
This should be fully fixed by introducing a pmd_read_atomic that reads
the pmd in order with THP disabled, or by reading the pmd atomically
with cmpxchg8b with THP enabled.
Luckily this new race condition only triggers in the places that must
already be covered by pmd_none_or_trans_huge_or_clear_bad() so the fix
is localized there but this bug is not related to THP.
NOTE: this can trigger on x86 32bit systems with PAE enabled with more
than 4G of ram; otherwise the high part of the pmd will never risk being
truncated because it would be zero at all times, which in turn hides the
SMP race.
This bug was discovered and fully debugged by Ulrich, quote:
----
[..]
pmd_none_or_trans_huge_or_clear_bad() loads the content of edx and
eax.
496 static inline int pmd_none_or_trans_huge_or_clear_bad(pmd_t
*pmd)
497 {
498 /* depend on compiler for an atomic pmd read */
499 pmd_t pmdval = *pmd;
// edi = pmd pointer
0xc0507a74 <sys_mincore+548>: mov 0x8(%esp),%edi
...
// edx = PTE page table high address
0xc0507a84 <sys_mincore+564>: mov 0x4(%edi),%edx
...
// eax = PTE page table low address
0xc0507a8e <sys_mincore+574>: mov (%edi),%eax
[..]
Please note that the PMD is not read atomically. These are two "mov"
instructions where the high order bits of the PMD entry are fetched
first. Hence, the above machine code is prone to the following race.
- The PMD entry {high|low} is 0x0000000000000000.
The "mov" at 0xc0507a84 loads 0x00000000 into edx.
- A page fault (on another CPU) sneaks in between the two "mov"
instructions and instantiates the PMD.
- The PMD entry {high|low} is now 0x00000003fda38067.
The "mov" at 0xc0507a8e loads 0xfda38067 into eax.
----
Reported-by: Ulrich Obergfell <[email protected]>
Signed-off-by: Andrea Arcangeli <[email protected]>
Cc: Mel Gorman <[email protected]>
Cc: Hugh Dickins <[email protected]>
Cc: Larry Woodman <[email protected]>
Cc: Petr Matousek <[email protected]>
Cc: Rik van Riel <[email protected]>
Cc: <[email protected]>
Signed-off-by: Andrew Morton <[email protected]>
Signed-off-by: Linus Torvalds <[email protected]> | 26c191788f18129af0eb32a358cdaea0c7479626 | linux | bigvul | 1 | null | null | null |
hfsplus: Fix potential buffer overflows
Commit ec81aecb2966 ("hfs: fix a potential buffer overflow") fixed a few
potential buffer overflows in the hfs filesystem. But as Timo Warns
pointed out, these changes need to be made on the hfsplus
filesystem as well.
Reported-by: Timo Warns <[email protected]>
Acked-by: WANG Cong <[email protected]>
Cc: Alexey Khoroshilov <[email protected]>
Cc: Miklos Szeredi <[email protected]>
Cc: Sage Weil <[email protected]>
Cc: Eugene Teo <[email protected]>
Cc: Roman Zippel <[email protected]>
Cc: Al Viro <[email protected]>
Cc: Christoph Hellwig <[email protected]>
Cc: Alexey Dobriyan <[email protected]>
Cc: Dave Anderson <[email protected]>
Cc: stable <[email protected]>
Cc: Andrew Morton <[email protected]>
Signed-off-by: Greg Kroah-Hartman <[email protected]>
Signed-off-by: Linus Torvalds <[email protected]> | 6f24f892871acc47b40dd594c63606a17c714f77 | linux | bigvul | 1 | null | null | null |
dl2k: Clean up rio_ioctl
The dl2k driver's rio_ioctl call has a few issues:
- No permissions checking
- Implements SIOCGMIIREG and SIOCSMIIREG using the SIOCDEVPRIVATE numbers
- Has a few ioctls that may have been used for debugging at one point
but have no place in the kernel proper.
This patch removes all but the MII ioctls, renumbers them to use the
standard ones, and adds the proper permission check for SIOCSMIIREG.
We can also get rid of the dl2k-specific struct mii_data in favor of
the generic struct mii_ioctl_data.
Since we have the phyid on hand, we can add the SIOCGMIIPHY ioctl too.
Most of the MII code for the driver could probably be converted to use
the generic MII library but I don't have a device to test the results.
Reported-by: Stephan Mueller <[email protected]>
Signed-off-by: Jeff Mahoney <[email protected]>
Signed-off-by: David S. Miller <[email protected]> | 1bb57e940e1958e40d51f2078f50c3a96a9b2d75 | linux | bigvul | 1 | null | null | null |
net: sock: validate data_len before allocating skb in sock_alloc_send_pskb()
We need to validate the number of pages consumed by data_len, otherwise the frags
array could be overflowed by userspace. So this patch validates data_len and
returns -EMSGSIZE when data_len may occupy more frags than MAX_SKB_FRAGS.
Signed-off-by: Jason Wang <[email protected]>
Signed-off-by: David S. Miller <[email protected]> | cc9b17ad29ecaa20bfe426a8d4dbfb94b13ff1cc | linux | bigvul | 1 | null | null | null |
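An illustrative, standalone sketch of the bound being described: compute how many pages data_len would occupy and reject requests that would need more fragment slots than the array holds. The constants and names are placeholders standing in for PAGE_SIZE and MAX_SKB_FRAGS, not the kernel's definitions.

```c
#include <stdbool.h>
#include <stddef.h>

#define PAGE_SIZE_SKETCH 4096u
#define MAX_FRAGS_SKETCH 17u    /* stand-in for MAX_SKB_FRAGS */

/* Return true if data_len fits in the per-packet fragment array;
 * a caller would otherwise fail the send with -EMSGSIZE. */
static bool data_len_fits(size_t data_len)
{
    size_t npages = (data_len + PAGE_SIZE_SKETCH - 1) / PAGE_SIZE_SKETCH;
    return npages <= MAX_FRAGS_SKETCH;
}
```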
hugepages: fix use after free bug in "quota" handling
hugetlbfs_{get,put}_quota() are badly named. They don't interact with the
general quota handling code, and they don't much resemble its behaviour.
Rather than being about maintaining limits on on-disk block usage by
particular users, they are instead about maintaining limits on in-memory
page usage (including anonymous MAP_PRIVATE copied-on-write pages)
associated with a particular hugetlbfs filesystem instance.
Worse, they work by having callbacks to the hugetlbfs filesystem code from
the low-level page handling code, in particular from free_huge_page().
This is a layering violation of itself, but more importantly, if the
kernel does a get_user_pages() on hugepages (which can happen from KVM
amongst others), then the free_huge_page() can be delayed until after the
associated inode has already been freed. If an unmount occurs at the
wrong time, even the hugetlbfs superblock where the "quota" limits are
stored may have been freed.
Andrew Barry proposed a patch to fix this by having hugepages, instead of
storing a pointer to their address_space and reaching the superblock from
there, store pointers directly to the superblock,
bumping the reference count as appropriate to avoid it being freed.
Andrew Morton rejected that version, however, on the grounds that it made
the existing layering violation worse.
This is a reworked version of Andrew's patch, which removes the extra, and
some of the existing, layering violation. It works by introducing the
concept of a hugepage "subpool" at the lower hugepage mm layer - that is a
finite logical pool of hugepages to allocate from. hugetlbfs now creates
a subpool for each filesystem instance with a page limit set, and a
pointer to the subpool gets added to each allocated hugepage, instead of
the address_space pointer used now. The subpool has its own lifetime and
is only freed once all pages in it _and_ all other references to it (i.e.
superblocks) are gone.
subpools are optional - a NULL subpool pointer is taken by the code to
mean that no subpool limits are in effect.
Previous discussion of this bug found in: "Fix refcounting in hugetlbfs
quota handling.". See: https://lkml.org/lkml/2011/8/11/28 or
http://marc.info/?l=linux-mm&m=126928970510627&w=1
v2: Fixed a bug spotted by Hillf Danton, and removed the extra parameter to
alloc_huge_page() - since it already takes the vma, it is not necessary.
Signed-off-by: Andrew Barry <[email protected]>
Signed-off-by: David Gibson <[email protected]>
Cc: Hugh Dickins <[email protected]>
Cc: Mel Gorman <[email protected]>
Cc: Minchan Kim <[email protected]>
Cc: Hillf Danton <[email protected]>
Cc: Paul Mackerras <[email protected]>
Signed-off-by: Andrew Morton <[email protected]>
Signed-off-by: Linus Torvalds <[email protected]> | 90481622d75715bfcb68501280a917dbfe516029 | linux | bigvul | 1 | null | null | null |
procfs: fix a vfsmount longterm reference leak
kern_mount() doesn't pair with plain mntput()...
Signed-off-by: Al Viro <[email protected]> | 905ad269c55fc62bee3da29f7b1d1efeba8aa1e1 | linux | bigvul | 1 | null | null | null |
fcaps: clear the same personality flags as suid when fcaps are used
If a process increases permissions using fcaps all of the dangerous
personality flags which are cleared for suid apps should also be cleared.
Thus programs given privilege with fcaps will continue to have address space
randomization enabled even if the parent tried to disable it to make it
easier to attack.
Signed-off-by: Eric Paris <[email protected]>
Reviewed-by: Serge Hallyn <[email protected]>
Signed-off-by: James Morris <[email protected]> | d52fc5dde171f030170a6cb78034d166b13c9445 | linux | bigvul | 1 | null | null | null |
KVM: unmap pages from the iommu when slots are removed
commit 32f6daad4651a748a58a3ab6da0611862175722f upstream.
We've been adding new mappings, but not destroying old mappings.
This can lead to a page leak as pages are pinned using
get_user_pages, but only unpinned with put_page if they still
exist in the memslots list on vm shutdown. A memslot that is
destroyed while an iommu domain is enabled for the guest will
therefore result in an elevated page reference count that is
never cleared.
Additionally, without this fix, the iommu is only programmed
with the first translation for a gpa. This can result in
peer-to-peer errors if a mapping is destroyed and replaced by a
new mapping at the same gpa as the iommu will still be pointing
to the original, pinned memory address.
Signed-off-by: Alex Williamson <[email protected]>
Signed-off-by: Marcelo Tosatti <[email protected]>
Signed-off-by: Greg Kroah-Hartman <[email protected]> | 09ca8e1173bcb12e2a449698c9ae3b86a8a10195 | linux | bigvul | 1 | null | null | null |
ext4: fix undefined behavior in ext4_fill_flex_info()
Commit 503358ae01b70ce6909d19dd01287093f6b6271c ("ext4: avoid divide by
zero when trying to mount a corrupted file system") fixes CVE-2009-4307
by performing a sanity check on s_log_groups_per_flex, since it can be
set to a bogus value by an attacker.
sbi->s_log_groups_per_flex = sbi->s_es->s_log_groups_per_flex;
groups_per_flex = 1 << sbi->s_log_groups_per_flex;
if (groups_per_flex < 2) { ... }
This patch fixes two potential issues in the previous commit.
1) The sanity check might only work on architectures like PowerPC.
On x86, 5 bits are used for the shifting amount. That means, given a
large s_log_groups_per_flex value like 36, groups_per_flex = 1 << 36
is essentially 1 << 4 = 16, rather than 0. This will bypass the check,
leaving s_log_groups_per_flex and groups_per_flex inconsistent.
2) The sanity check relies on undefined behavior, i.e., oversized shift.
A standard-conforming C compiler could rewrite the check in unexpected
ways. Consider the following equivalent form, assuming groups_per_flex
is unsigned for simplicity.
groups_per_flex = 1 << sbi->s_log_groups_per_flex;
if (groups_per_flex == 0 || groups_per_flex == 1) {
We compile the code snippet using Clang 3.0 and GCC 4.6. Clang will
completely optimize away the check groups_per_flex == 0, leaving the
patched code as vulnerable as the original. GCC keeps the check, but
there is no guarantee that future versions will do the same.
Signed-off-by: Xi Wang <[email protected]>
Signed-off-by: "Theodore Ts'o" <[email protected]>
Cc: [email protected] | d50f2ab6f050311dbf7b8f5501b25f0bf64a439b | linux | bigvul | 1 | null | null | null |
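A sketch of the shape of a safe check, under the assumption that only the validation pattern matters here: validate the shift amount itself instead of shifting first and inspecting the (possibly undefined) result. The function and range are illustrative, not ext4's exact code.

```c
#include <stdbool.h>
#include <stdint.h>

/* Validate log_groups_per_flex before using it as a shift amount.
 * Shifting a 32-bit value by >= 32 bits is undefined behaviour in C,
 * so the check must happen on the exponent, not on the shifted result. */
static bool flex_bg_size_sane(uint32_t log_groups_per_flex)
{
    if (log_groups_per_flex < 1 || log_groups_per_flex > 31)
        return false;
    /* 1u << log_groups_per_flex is now well defined and >= 2 */
    return true;
}
```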
Fix Win32 RPC Crashes. | 8864019f6d88b13d3442843d9e6ebeb8dd938831 | bitcoin | bigvul | 1 | null | null | null |
Merge pull request #1 from nenolod/insp20
DNS resolver hardening (insp20 branch) | fe7dbd2c104c37f6f3af7d9f1646a3c332aea4a4 | inspircd | bigvul | 1 | null | null | null |
KVM: Ensure all vcpus are consistent with in-kernel irqchip settings
(cherry picked from commit 3e515705a1f46beb1c942bb8043c16f8ac7b1e9e)
If some vcpus are created before KVM_CREATE_IRQCHIP, then
irqchip_in_kernel() and vcpu->arch.apic will be inconsistent, leading
to potential NULL pointer dereferences.
Fix by:
- ensuring that no vcpus are installed when KVM_CREATE_IRQCHIP is called
- ensuring that a vcpu has an apic if it is installed after KVM_CREATE_IRQCHIP
This is somewhat long winded because vcpu->arch.apic is created without
kvm->lock held.
Based on earlier patch by Michael Ellerman.
Signed-off-by: Michael Ellerman <[email protected]>
Signed-off-by: Avi Kivity <[email protected]>
Signed-off-by: Greg Kroah-Hartman <[email protected]> | 9c895160d25a76c21b65bad141b08e8d4f99afef | linux | bigvul | 1 | null | null | null |
Avoid uint overflow in case the length + index is over UINT_MAX | dcdf4fd954e3213c355746fa15b7480461972308 | taglib | bigvul | 1 | null | null | null |
[IPV6]: Fix slab corruption running ip6sic
From: Eric Sesterhenn <[email protected]>
Signed-off-by: Andrew Morton <[email protected]>
Signed-off-by: David S. Miller <[email protected]> | d0772b70faaf8e9f2013b6c4273d94d5eac8047a | linux | bigvul | 1 | null | null | null |
Fix bounds checks again. | 1aec04dbf8a24b8a6ba64c4f74efa0628e36db0b | file | bigvul | 1 | null | null | null |
mm: thp: fix pmd_bad() triggering in code paths holding mmap_sem read mode
commit 1a5a9906d4e8d1976b701f889d8f35d54b928f25 upstream.
In some cases it may happen that pmd_none_or_clear_bad() is called with
the mmap_sem held in read mode. In those cases the huge page faults can
allocate hugepmds under pmd_none_or_clear_bad() and that can trigger a
false positive from pmd_bad() that will not like to see a pmd
materializing as trans huge.
It's not khugepaged causing the problem, khugepaged holds the mmap_sem
in write mode (and all those sites must hold the mmap_sem in read mode
to prevent pagetables to go away from under them, during code review it
seems vm86 mode on 32bit kernels requires that too unless it's
restricted to 1 thread per process or UP builds). The race is only with
the huge pagefaults that can convert a pmd_none() into a
pmd_trans_huge().
Effectively all these pmd_none_or_clear_bad() sites running with
mmap_sem in read mode are somewhat speculative with the page faults, and
the result is always undefined when they run simultaneously. This is
probably why it wasn't common to run into this. For example if the
madvise(MADV_DONTNEED) runs zap_page_range() shortly before the page
fault, the hugepage will not be zapped, if the page fault runs first it
will be zapped.
Altering pmd_bad() not to error out if it finds hugepmds won't be enough
to fix this, because zap_pmd_range would then proceed to call
zap_pte_range (which would be incorrect if the pmd become a
pmd_trans_huge()).
The simplest way to fix this is to read the pmd in the local stack
(regardless of what we read, no need of actual CPU barriers, only
compiler barrier needed), and be sure it is not changing under the code
that computes its value. Even if the real pmd is changing under the
value we hold on the stack, we don't care. If we actually end up in
zap_pte_range it means the pmd was not none already and it was not huge,
and it can't become huge from under us (khugepaged locking explained
above).
All we need is to enforce that there is no way anymore that in a code
path like below, pmd_trans_huge can be false, but pmd_none_or_clear_bad
can run into a hugepmd. The overhead of a barrier() is just a compiler
tweak and should not be measurable (I only added it for THP builds). I
don't exclude different compiler versions may have prevented the race
too by caching the value of *pmd on the stack (that hasn't been
verified, but it wouldn't be impossible considering
pmd_none_or_clear_bad, pmd_bad, pmd_trans_huge, pmd_none are all inlines
and there's no external function called in between pmd_trans_huge and
pmd_none_or_clear_bad).
if (pmd_trans_huge(*pmd)) {
if (next-addr != HPAGE_PMD_SIZE) {
VM_BUG_ON(!rwsem_is_locked(&tlb->mm->mmap_sem));
split_huge_page_pmd(vma->vm_mm, pmd);
} else if (zap_huge_pmd(tlb, vma, pmd, addr))
continue;
/* fall through */
}
if (pmd_none_or_clear_bad(pmd))
Because this race condition could be exercised without special
privileges this was reported in CVE-2012-1179.
The race was identified and fully explained by Ulrich who debugged it.
I'm quoting his accurate explanation below, for reference.
====== start quote =======
mapcount 0 page_mapcount 1
kernel BUG at mm/huge_memory.c:1384!
At some point prior to the panic, a "bad pmd ..." message similar to the
following is logged on the console:
mm/memory.c:145: bad pmd ffff8800376e1f98(80000000314000e7).
The "bad pmd ..." message is logged by pmd_clear_bad() before it clears
the page's PMD table entry.
143 void pmd_clear_bad(pmd_t *pmd)
144 {
-> 145 pmd_ERROR(*pmd);
146 pmd_clear(pmd);
147 }
After the PMD table entry has been cleared, there is an inconsistency
between the actual number of PMD table entries that are mapping the page
and the page's map count (_mapcount field in struct page). When the page
is subsequently reclaimed, __split_huge_page() detects this inconsistency.
1381 if (mapcount != page_mapcount(page))
1382 printk(KERN_ERR "mapcount %d page_mapcount %d\n",
1383 mapcount, page_mapcount(page));
-> 1384 BUG_ON(mapcount != page_mapcount(page));
The root cause of the problem is a race of two threads in a multithreaded
process. Thread B incurs a page fault on a virtual address that has never
been accessed (PMD entry is zero) while Thread A is executing an madvise()
system call on a virtual address within the same 2 MB (huge page) range.
virtual address space
.---------------------.
| |
| |
.-|---------------------|
| | |
| | |<-- B(fault)
| | |
2 MB | |/////////////////////|-.
huge < |/////////////////////| > A(range)
page | |/////////////////////|-'
| | |
| | |
'-|---------------------|
| |
| |
'---------------------'
- Thread A is executing an madvise(..., MADV_DONTNEED) system call
on the virtual address range "A(range)" shown in the picture.
sys_madvise
// Acquire the semaphore in shared mode.
down_read(&current->mm->mmap_sem)
...
madvise_vma
switch (behavior)
case MADV_DONTNEED:
madvise_dontneed
zap_page_range
unmap_vmas
unmap_page_range
zap_pud_range
zap_pmd_range
//
// Assume that this huge page has never been accessed.
// I.e. content of the PMD entry is zero (not mapped).
//
if (pmd_trans_huge(*pmd)) {
// We don't get here due to the above assumption.
}
//
// Assume that Thread B incurred a page fault and
.---------> // sneaks in here as shown below.
| //
| if (pmd_none_or_clear_bad(pmd))
| {
| if (unlikely(pmd_bad(*pmd)))
| pmd_clear_bad
| {
| pmd_ERROR
| // Log "bad pmd ..." message here.
| pmd_clear
| // Clear the page's PMD entry.
| // Thread B incremented the map count
| // in page_add_new_anon_rmap(), but
| // now the page is no longer mapped
| // by a PMD entry (-> inconsistency).
| }
| }
|
v
- Thread B is handling a page fault on virtual address "B(fault)" shown
in the picture.
...
do_page_fault
__do_page_fault
// Acquire the semaphore in shared mode.
down_read_trylock(&mm->mmap_sem)
...
handle_mm_fault
if (pmd_none(*pmd) && transparent_hugepage_enabled(vma))
// We get here due to the above assumption (PMD entry is zero).
do_huge_pmd_anonymous_page
alloc_hugepage_vma
// Allocate a new transparent huge page here.
...
__do_huge_pmd_anonymous_page
...
spin_lock(&mm->page_table_lock)
...
page_add_new_anon_rmap
// Here we increment the page's map count (starts at -1).
atomic_set(&page->_mapcount, 0)
set_pmd_at
// Here we set the page's PMD entry which will be cleared
// when Thread A calls pmd_clear_bad().
...
spin_unlock(&mm->page_table_lock)
The mmap_sem does not prevent the race because both threads are acquiring
it in shared mode (down_read). Thread B holds the page_table_lock while
the page's map count and PMD table entry are updated. However, Thread A
does not synchronize on that lock.
====== end quote =======
[[email protected]: checkpatch fixes]
Reported-by: Ulrich Obergfell <[email protected]>
Signed-off-by: Andrea Arcangeli <[email protected]>
Acked-by: Johannes Weiner <[email protected]>
Cc: Mel Gorman <[email protected]>
Cc: Hugh Dickins <[email protected]>
Cc: Dave Jones <[email protected]>
Acked-by: Larry Woodman <[email protected]>
Acked-by: Rik van Riel <[email protected]>
Cc: Mark Salter <[email protected]>
Signed-off-by: Andrew Morton <[email protected]>
Signed-off-by: Linus Torvalds <[email protected]>
Signed-off-by: Greg Kroah-Hartman <[email protected]> | 4a1d704194a441bf83c636004a479e01360ec850 | linux | bigvul | 1 | null | null | null |
refactor pyfribidi.c module
pyfribidi.c is now compiled as _pyfribidi. This module only handles
unicode internally and doesn't use the fribidi_utf8_to_unicode
function (which can't handle 4 byte utf-8 sequences). This fixes the
buffer overflow in issue #2.
The code is now also much simpler: pyfribidi.c is down from 280 to 130
lines of code.
We now ship a pure python pyfribidi that handles the case when
non-unicode strings are passed in.
We now also adapt the size of the output string if clean=True is
passed. | d2860c655357975e7b32d84e6b45e98f0dcecd7a | pyfribidi | bigvul | 1 | null | null | null |
mm: memcg: Correct unregistring of events attached to the same eventfd
There is an issue when memcg unregisters events that were attached to
the same eventfd:
- On the first call mem_cgroup_usage_unregister_event() removes all
events attached to a given eventfd, and if there were no events left,
thresholds->primary would become NULL;
- Since there were several events registered, the cgroups core will call
mem_cgroup_usage_unregister_event() again, but now the kernel will oops,
as the function doesn't expect that threshold->primary may be NULL.
That's a good question whether mem_cgroup_usage_unregister_event()
should actually remove all events in one go, but nowadays it can't
do any better as cftype->unregister_event callback doesn't pass
any private event-associated cookie. So, let's fix the issue by
simply checking for threshold->primary.
FWIW, w/o the patch the following oops may be observed:
BUG: unable to handle kernel NULL pointer dereference at 0000000000000004
IP: [<ffffffff810be32c>] mem_cgroup_usage_unregister_event+0x9c/0x1f0
Pid: 574, comm: kworker/0:2 Not tainted 3.3.0-rc4+ #9 Bochs Bochs
RIP: 0010:[<ffffffff810be32c>] [<ffffffff810be32c>] mem_cgroup_usage_unregister_event+0x9c/0x1f0
RSP: 0018:ffff88001d0b9d60 EFLAGS: 00010246
Process kworker/0:2 (pid: 574, threadinfo ffff88001d0b8000, task ffff88001de91cc0)
Call Trace:
[<ffffffff8107092b>] cgroup_event_remove+0x2b/0x60
[<ffffffff8103db94>] process_one_work+0x174/0x450
[<ffffffff8103e413>] worker_thread+0x123/0x2d0
Cc: stable <[email protected]>
Signed-off-by: Anton Vorontsov <[email protected]>
Acked-by: KAMEZAWA Hiroyuki <[email protected]>
Cc: Kirill A. Shutemov <[email protected]>
Cc: Michal Hocko <[email protected]>
Signed-off-by: Linus Torvalds <[email protected]> | 371528caec553785c37f73fa3926ea0de84f986f | linux | bigvul | 1 | null | null | null |
Be more careful when parsing Vorbis Comments | b3646a07348ffa276ea41a9dae03ddc63ea6c532 | taglib | bigvul | 1 | null | null | null |
Make sure to not try dividing by zero | 77d61c6eca4d08b9b025738acf6b926cc750db23 | taglib | bigvul | 1 | null | null | null |
regset: Prevent null pointer reference on readonly regsets
The regset common infrastructure assumed that regsets would always
have .get and .set methods, but not necessarily .active methods.
Unfortunately people have since written regsets without .set methods.
Rather than putting in stub functions everywhere, handle regsets with
null .get or .set methods explicitly.
Signed-off-by: H. Peter Anvin <[email protected]>
Reviewed-by: Oleg Nesterov <[email protected]>
Acked-by: Roland McGrath <[email protected]>
Cc: <[email protected]>
Signed-off-by: Linus Torvalds <[email protected]> | c8e252586f8d5de906385d8cf6385fee289a825e | linux | bigvul | 1 | null | null | null |
cifs: fix dentry refcount leak when opening a FIFO on lookup
commit 5bccda0ebc7c0331b81ac47d39e4b920b198b2cd upstream.
The cifs code will attempt to open files on lookup under certain
circumstances. What happens though if we find that the file we opened
was actually a FIFO or other special file?
Currently, the open filehandle just ends up being leaked leading to
a dentry refcount mismatch and oops on umount. Fix this by having the
code close the filehandle on the server if it turns out not to be a
regular file. While we're at it, change this spaghetti if statement
into a switch too.
Reported-by: CAI Qian <[email protected]>
Tested-by: CAI Qian <[email protected]>
Reviewed-by: Shirish Pargaonkar <[email protected]>
Signed-off-by: Jeff Layton <[email protected]>
Signed-off-by: Steve French <[email protected]>
Signed-off-by: Greg Kroah-Hartman <[email protected]> | 88d7d4e4a439f32acc56a6d860e415ee71d3df08 | linux | bigvul | 1 | null | null | null |
Null pointer deref in kadmind [CVE-2012-1013]
The fix for #6626 could cause kadmind to dereference a null pointer if
a create-principal request contains no password but does contain the
KRB5_KDB_DISALLOW_ALL_TIX flag (e.g. "addprinc -randkey -allow_tix
name"). Only clients authorized to create principals can trigger the
bug. Fix the bug by testing for a null password in check_1_6_dummy.
CVSSv2 vector: AV:N/AC:M/Au:S/C:N/I:N/A:P/E:H/RL:O/RC:C
[[email protected]: Minor style change and commit message]
ticket: 7152
target_version: 1.10.2
tags: pullup | c5be6209311d4a8f10fda37d0d3f876c1b33b77b | krb5 | bigvul | 1 | null | null | null |
kernel/sys.c: fix stack memory content leak via UNAME26
Calling uname() with the UNAME26 personality set allows a leak of kernel
stack contents. This fixes it by defensively calculating the length of
copy_to_user() call, making the len argument unsigned, and initializing
the stack buffer to zero (now technically unneeded, but hey, overkill).
CVE-2012-0957
Reported-by: PaX Team <[email protected]>
Signed-off-by: Kees Cook <[email protected]>
Cc: Andi Kleen <[email protected]>
Cc: PaX Team <[email protected]>
Cc: Brad Spengler <[email protected]>
Cc: <[email protected]>
Signed-off-by: Andrew Morton <[email protected]>
Signed-off-by: Linus Torvalds <[email protected]> | 2702b1526c7278c4d65d78de209a465d4de2885e | linux | bigvul | 1 | null | null | null |
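A standalone sketch of the defensive pattern the commit describes: zero the buffer, compute an explicit copy length bounded by the destination size, and only copy that many bytes to the caller. All names and sizes here are illustrative, not the kernel's uname path.

```c
#include <stdio.h>
#include <string.h>

/* Copy a release string into a caller-provided buffer without ever
 * exposing uninitialized stack bytes past the string's end. */
static size_t copy_release(char *dst, size_t dst_len, const char *release)
{
    char buf[65];
    size_t len;

    if (dst_len == 0)
        return 0;
    memset(buf, 0, sizeof(buf));               /* no stale stack contents */
    snprintf(buf, sizeof(buf), "%s", release); /* bounded, NUL-terminated */
    len = strlen(buf);
    if (len >= dst_len)
        len = dst_len - 1;                     /* bound by destination size */
    memcpy(dst, buf, len);
    dst[len] = '\0';
    return len;
}
```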
block: Fix io_context leak after clone with CLONE_IO
With CLONE_IO, copy_io() increments both ioc->refcount and ioc->nr_tasks.
However exit_io_context() only decrements ioc->refcount if ioc->nr_tasks
reaches 0.
Always call put_io_context() in exit_io_context().
Signed-off-by: Louis Rilling <[email protected]>
Signed-off-by: Jens Axboe <[email protected]> | 61cc74fbb87af6aa551a06a370590c9bc07e29d9 | linux | bigvul | 1 | null | null | null |
Explicitly remove file permissions for settings and DB | 5632c35d6759f5e13a7dfe78e4ee6403ff6a8e3e | mumble | bigvul | 1 | null | null | null |
Fixed stack based buffer overflow in transparent cookie encryption (see separate advisory) | 73b1968ee30f6d9d2dae497544b910e68e114bfa | suhosin | bigvul | 1 | null | null | null |
igmp: Avoid zero delay when receiving odd mixture of IGMP queries
Commit 5b7c84066733c5dfb0e4016d939757b38de189e4 ('ipv4: correct IGMP
behavior on v3 query during v2-compatibility mode') added yet another
case for query parsing, which can result in max_delay = 0. Substitute
a value of 1, as in the usual v3 case.
Reported-by: Simon McVittie <[email protected]>
References: http://bugs.debian.org/654876
Signed-off-by: Ben Hutchings <[email protected]>
Signed-off-by: David S. Miller <[email protected]> | a8c1f65c79cbbb2f7da782d4c9d15639a9b94b27 | linux | bigvul | 1 | null | null | null |
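The essence of the fix as a tiny hedged sketch: clamp the computed delay so a zero value from an odd query mixture can never propagate. The function name is illustrative.

```c
/* Compute the response delay for a query, never returning zero.
 * max_delay comes from parsing the query; an odd mixture of IGMP
 * versions can otherwise drive it to 0. */
static unsigned int igmp_response_delay(unsigned int max_delay)
{
    if (max_delay == 0)
        max_delay = 1;      /* substitute 1, as in the usual v3 case */
    return max_delay;
}
```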
Unused iocbs in a batch should not be accounted as active.
commit 69e4747ee9727d660b88d7e1efe0f4afcb35db1b upstream.
Since commit 080d676de095 ("aio: allocate kiocbs in batches") iocbs are
allocated in a batch during processing of first iocbs. All iocbs in a
batch are automatically added to ctx->active_reqs list and accounted in
ctx->reqs_active.
If one (not the last one) of the iocbs submitted by a user fails, further
iocbs are not processed, but they are still present in ctx->active_reqs
and accounted in ctx->reqs_active. This causes the process to get stuck in a D
state in wait_for_all_aios() on exit, since ctx->reqs_active will never
go down to zero. Furthermore, since kiocb_batch_free() frees the iocb
without removing it from the active_reqs list, the list becomes corrupted,
which may cause an oops.
Fix this by removing iocb from ctx->active_reqs and updating
ctx->reqs_active in kiocb_batch_free().
Signed-off-by: Gleb Natapov <[email protected]>
Reviewed-by: Jeff Moyer <[email protected]>
Signed-off-by: Linus Torvalds <[email protected]>
Signed-off-by: Greg Kroah-Hartman <[email protected]> | 802f43594d6e4d2ac61086d239153c17873a0428 | linux | bigvul | 1 | null | null | null |
KVM: x86: fix missing checks in syscall emulation
On hosts without this patch, 32bit guests will crash (and 64bit guests
may behave in a wrong way), for example by simply executing the following
nasm demo application:
[bits 32]
global _start
SECTION .text
_start: syscall
(I tested it with winxp and linux - both always crashed)
Disassembly of section .text:
00000000 <_start>:
0: 0f 05 syscall
The reason seems to be a missing "invalid opcode" trap (int6) for the
syscall opcode "0f05", which is not available on Intel CPUs
outside long mode, nor on some AMD CPUs in legacy mode
(depending on CPU vendor, MSR_EFER and cpuid).
Because the previously mentioned OSs may not set up the corresponding
syscall target registers (STAR, LSTAR, CSTAR), they remain
NULL and the (non-trapping) syscalls lead to multiple
faults and finally crashes.
Depending on the architecture (AMD or Intel) pretended by
guests, various checks according to vendor's documentation
are implemented to overcome the current issue and behave
like the CPUs physical counterparts.
[mtosatti: cleanup/beautify code]
Signed-off-by: Stephan Baerwolf <[email protected]>
Signed-off-by: Marcelo Tosatti <[email protected]> | c2226fc9e87ba3da060e47333657cd6616652b84 | linux | bigvul | 1 | null | null | null |
drm: integer overflow in drm_mode_dirtyfb_ioctl()
There is a potential integer overflow in drm_mode_dirtyfb_ioctl()
if userspace passes in a large num_clips. The call to kmalloc would
allocate a small buffer, and the call to fb->funcs->dirty may result
in a memory corruption.
Reported-by: Haogang Chen <[email protected]>
Signed-off-by: Xi Wang <[email protected]>
Cc: [email protected]
Signed-off-by: Dave Airlie <[email protected]> | a5cd335165e31db9dbab636fd29895d41da55dd2 | linux | bigvul | 1 | null | null | null |
xfs: validate acl count
This prevents in-memory corruption and possible panics if the on-disk
ACL is badly corrupted.
Signed-off-by: Christoph Hellwig <[email protected]>
Signed-off-by: Ben Myers <[email protected]> | fa8b18edd752a8b4e9d1ee2cd615b82c93cf8bba | linux | bigvul | 1 | null | null | null |
CVE-2012-0037
Enforce entity loading policy in raptor_libxml_resolveEntity
and raptor_libxml_getEntity by checking for file URIs and network URIs.
Add RAPTOR_OPTION_LOAD_EXTERNAL_ENTITIES / loadExternalEntities for
turning on loading of XML external entity loading, disabled by default.
This affects all the parsers that use SAX2: rdfxml, rss-tag-soup (and
aliases) and rdfa. | a676f235309a59d4aa78eeffd2574ae5d341fcb0 | raptor | bigvul | 1 | null | null | null |
URL sanitize: reject URLs containing bad data
Protocols (IMAP, POP3 and SMTP) that use the path part of a URL in a
decoded manner now use the new Curl_urldecode() function to reject URLs
with embedded control codes (anything that is or decodes to a byte value
less than 32).
URLs containing such codes could easily otherwise be used to do harm and
allow users to do unintended actions with otherwise innocent tools and
applications. Like for example using a URL like
pop3://pop3.example.com/1%0d%0aDELE%201 when the app wants a URL to get
a mail and instead this would delete one.
This flaw is considered a security vulnerability: CVE-2012-0036
Security advisory at: http://curl.haxx.se/docs/adv_20120124.html
Reported by: Dan Fandrich | 75ca568fa1c19de4c5358fed246686de8467c238 | curl | bigvul | 1 | null | null | null |
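A small userspace sketch of the rejection rule this record describes, not libcurl's actual Curl_urldecode(): percent-decode a URL path and fail if any decoded byte is a control character (below 0x20), so embedded CRLF sequences cannot smuggle extra protocol commands. The function name and buffer handling are made up for the demo.

```c
#include <ctype.h>
#include <stdio.h>
#include <stdlib.h>

static int url_decode(const char *in, char *out, size_t outlen)
{
    size_t o = 0;

    while (*in) {
        unsigned char c = (unsigned char)*in++;

        if (c == '%' && isxdigit((unsigned char)in[0]) &&
            isxdigit((unsigned char)in[1])) {
            char hex[3] = { in[0], in[1], 0 };
            c = (unsigned char)strtoul(hex, NULL, 16);
            in += 2;
        }
        if (c < 0x20)                 /* reject CR, LF and other control bytes */
            return -1;
        if (o + 1 >= outlen)          /* leave room for the terminator */
            return -1;
        out[o++] = (char)c;
    }
    out[o] = '\0';
    return 0;
}

int main(void)
{
    char buf[128];
    const char *evil = "1%0d%0aDELE%201";    /* path from the example above */

    printf("decode(\"%s\") -> %d\n", evil, url_decode(evil, buf, sizeof(buf)));
    printf("decode(\"1\") -> %d\n", url_decode("1", buf, sizeof(buf)));
    return 0;
}
```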
Move "exit_robust_list" into mm_release()
We don't want to get rid of the futexes just at exit() time, we want to
drop them when doing an execve() too, since that gets rid of the
previous VM image too.
Doing it at mm_release() time means that we automatically always do it
when we disassociate a VM map from the task.
Reported-by: [email protected]
Cc: Andrew Morton <[email protected]>
Cc: Nick Piggin <[email protected]>
Cc: Hugh Dickins <[email protected]>
Cc: Ingo Molnar <[email protected]>
Cc: Thomas Gleixner <[email protected]>
Cc: Brad Spengler <[email protected]>
Cc: Alex Efros <[email protected]>
Cc: Peter Zijlstra <[email protected]>
Cc: Oleg Nesterov <[email protected]>
Signed-off-by: Linus Torvalds <[email protected]> | 8141c7f3e7aee618312fa1c15109e1219de784a7 | linux | bigvul | 1 | null | null | null |
rose: Add length checks to CALL_REQUEST parsing
Define some constant offsets for CALL_REQUEST based on the description
at <http://www.techfest.com/networking/wan/x25plp.htm> and the
definition of ROSE as using 10-digit (5-byte) addresses. Use them
consistently. Validate all implicit and explicit facilities lengths.
Validate the address length byte rather than either trusting or
assuming its value.
Signed-off-by: Ben Hutchings <[email protected]>
Signed-off-by: David S. Miller <[email protected]> | e0bccd315db0c2f919e7fcf9cb60db21d9986f52 | linux | bigvul | 1 | null | null | null |
ROSE: prevent heap corruption with bad facilities
When parsing the FAC_NATIONAL_DIGIS facilities field, it's possible for
a remote host to provide more digipeaters than expected, resulting in
heap corruption. Check against ROSE_MAX_DIGIS to prevent overflows, and
abort facilities parsing on failure.
Additionally, when parsing the FAC_CCITT_DEST_NSAP and
FAC_CCITT_SRC_NSAP facilities fields, a remote host can provide a length
of less than 10, resulting in an underflow in a memcpy size, causing a
kernel panic due to massive heap corruption. A length of greater than
20 results in a stack overflow of the callsign array. Abort facilities
parsing on these invalid length values.
Signed-off-by: Dan Rosenberg <[email protected]>
Cc: [email protected]
Signed-off-by: David S. Miller <[email protected]> | be20250c13f88375345ad99950190685eda51eb8 | linux | bigvul | 1 | null | null | null |
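An illustrative sketch of the underflow described above, not the ROSE patch: a peer-supplied length smaller than the fixed address size makes "len - 10" wrap to a huge unsigned memcpy size, so the range must be validated before subtracting. Field layout and constants are assumptions for the demo.

```c
#include <stdio.h>
#include <string.h>

#define ADDR_LEN      10
#define CALLSIGN_MAX  20

static int parse_nsap(const unsigned char *fac, size_t fac_len, char *callsign)
{
    size_t len = fac[0];                        /* length byte from the peer */

    if (len < ADDR_LEN || len > CALLSIGN_MAX)   /* reject before subtracting */
        return -1;
    if (len + 1 > fac_len)                      /* stay inside the buffer */
        return -1;

    memcpy(callsign, fac + 1 + ADDR_LEN, len - ADDR_LEN);
    callsign[len - ADDR_LEN] = '\0';
    return 0;
}

int main(void)
{
    unsigned char bad[] = { 5, 'x', 'x', 'x', 'x', 'x' };   /* len < 10 */
    unsigned char ok[]  = { 14, '1','2','3','4','5','6','7','8','9','0',
                            'N','0','C','L' };
    char cs[CALLSIGN_MAX + 1];

    printf("short length  -> %d\n", parse_nsap(bad, sizeof(bad), cs));
    printf("valid length  -> %d (%s)\n", parse_nsap(ok, sizeof(ok), cs), cs);
    return 0;
}
```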
Sched: fix skip_clock_update optimization
idle_balance() drops/retakes rq->lock, leaving the previous task
vulnerable to set_tsk_need_resched(). Clear it after we return
from balancing instead, and in setup_thread_stack() as well, so
no successfully descheduled or never scheduled task has it set.
Need resched confused the skip_clock_update logic, which assumes
that the next call to update_rq_clock() will come nearly immediately
after being set. Make the optimization robust against the case of waking
a sleeper before it successfully deschedules by checking that
the current task has not been dequeued before setting the flag,
since it is that useless clock update we're trying to save, and
clear it unconditionally in schedule() proper instead of conditionally
in put_prev_task().
Signed-off-by: Mike Galbraith <[email protected]>
Reported-by: Bjoern B. Brandenburg <[email protected]>
Tested-by: Yong Zhang <[email protected]>
Signed-off-by: Peter Zijlstra <[email protected]>
Cc: [email protected]
LKML-Reference: <[email protected]>
Signed-off-by: Ingo Molnar <[email protected]> | f26f9aff6aaf67e9a430d16c266f91b13a5bff64 | linux | bigvul | 1 | null | null | null |
perf, powerpc: Handle events that raise an exception without overflowing
Events on POWER7 can roll back if a speculative event doesn't
eventually complete. Unfortunately in some rare cases they will
raise a performance monitor exception. We need to catch this to
ensure we reset the PMC. In all cases the PMC will be 256 or less
cycles from overflow.
Signed-off-by: Anton Blanchard <[email protected]>
Signed-off-by: Peter Zijlstra <[email protected]>
Cc: <[email protected]> # as far back as it applies cleanly
LKML-Reference: <20110309143842.6c22845e@kryten>
Signed-off-by: Ingo Molnar <[email protected]> | 0837e3242c73566fc1c0196b4ec61779c25ffc93 | linux | bigvul | 1 | null | null | null |
sendmmsg/sendmsg: fix unsafe user pointer access
Dereferencing a user pointer directly from kernel-space without going
through the copy_from_user family of functions is a bad idea. Two
such usages can be found in the sendmsg code path called from sendmmsg,
added by
commit c71d8ebe7a4496fb7231151cb70a6baa0cb56f9a upstream.
commit 5b47b8038f183b44d2d8ff1c7d11a5c1be706b34 in the 3.0-stable tree.
Usages are performed through memcmp() and memcpy() directly. Fix those
by using the already copied msg_sys structure instead of the __user *msg
structure. Note that msg_sys can be set to NULL by verify_compat_iovec()
or verify_iovec(), which requires additional NULL pointer checks.
Signed-off-by: Mathieu Desnoyers <[email protected]>
Signed-off-by: David Goulet <[email protected]>
CC: Tetsuo Handa <[email protected]>
CC: Anton Blanchard <[email protected]>
CC: David S. Miller <[email protected]>
CC: stable <[email protected]>
Signed-off-by: David S. Miller <[email protected]> | bc909d9ddbf7778371e36a651d6e4194b1cc7d4c | linux | bigvul | 1 | null | null | null |
ipv6: udp: fix the wrong headroom check
At this point, skb->data points to skb_transport_header.
So, headroom check is wrong.
In some cases, e.g. a bridge (UFO on) + an eth device (UFO off),
there is not enough headroom for the IPv6 fragment header,
but the headroom check is always false.
This causes data to be moved into the area before skb->head
when the IPv6 fragment header is added to the skb.
Signed-off-by: Shan Wei <[email protected]>
Acked-by: Herbert Xu <[email protected]>
Signed-off-by: David S. Miller <[email protected]> | a9cf73ea7ff78f52662c8658d93c226effbbedde | linux | bigvul | 1 | null | null | null |
NFSv4: Convert the open and close ops to use fmode
Signed-off-by: Trond Myklebust <[email protected]> | dc0b027dfadfcb8a5504f7d8052754bf8d501ab9 | linux | bigvul | 1 | null | null | null |
NFSv4: include bitmap in nfsv4 get acl data
The NFSv4 bitmap size is unbounded: a server can return an arbitrarily
sized bitmap in an FATTR4_WORD0_ACL request. Instead of using
nfs4_fattr_bitmap_maxsz as a guess at the maximum bitmask returned by a server,
include the bitmap (xdr length plus bitmasks) and the acl data
xdr length in the (cached) acl page data.
This is a general solution to commit e5012d1f "NFSv4.1: update
nfs4_fattr_bitmap_maxsz" and fixes hitting a BUG_ON in xdr_shrink_bufhead
when getting ACLs.
Fix a bug in decode_getacl that returned -EINVAL on ACLs > page when getxattr
was called with a NULL buffer, preventing ACL > PAGE_SIZE from being retrieved.
Cc: [email protected]
Signed-off-by: Andy Adamson <[email protected]>
Signed-off-by: Trond Myklebust <[email protected]> | bf118a342f10dafe44b14451a1392c3254629a1f | linux | bigvul | 1 | null | null | null |
dm: do not forward ioctls from logical volumes to the underlying device
A logical volume can map to just part of underlying physical volume.
In this case, it must be treated like a partition.
Based on a patch from Alasdair G Kergon.
Cc: Alasdair G Kergon <[email protected]>
Cc: [email protected]
Signed-off-by: Paolo Bonzini <[email protected]>
Signed-off-by: Linus Torvalds <[email protected]> | ec8013beddd717d1740cfefb1a9b900deef85462 | linux | bigvul | 1 | null | null | null |
net: Audit drivers to identify those needing IFF_TX_SKB_SHARING cleared
After the last patch, we are left in a state in which only drivers calling
ether_setup have IFF_TX_SKB_SHARING set (we assume that drivers touching real
hardware call ether_setup for their net_devices and don't hold any state in
their skbs). There are a handful of drivers that violate this assumption, of
course, and need to be fixed up. This patch identifies those drivers and marks
them as not being able to support the safe transmission of skbs by clearing the
IFF_TX_SKB_SHARING flag in priv_flags.
Signed-off-by: Neil Horman <[email protected]>
CC: Karsten Keil <[email protected]>
CC: "David S. Miller" <[email protected]>
CC: Jay Vosburgh <[email protected]>
CC: Andy Gospodarek <[email protected]>
CC: Patrick McHardy <[email protected]>
CC: Krzysztof Halasa <[email protected]>
CC: "John W. Linville" <[email protected]>
CC: Greg Kroah-Hartman <[email protected]>
CC: Marcel Holtmann <[email protected]>
CC: Johannes Berg <[email protected]>
Signed-off-by: David S. Miller <[email protected]> | 550fd08c2cebad61c548def135f67aba284c6162 | linux | bigvul | 1 | null | null | null |
oom: fix integer overflow of points in oom_badness
commit ff05b6f7ae762b6eb464183eec994b28ea09f6dd upstream.
An integer overflow will happen on 64bit archs if a task's sum of rss,
swapents and nr_ptes exceeds (2^31)/1000. This was introduced by
commit
f755a04 oom: use pte pages in OOM score
where the oom score computation was divided into several steps and is no
longer computed as one expression in unsigned long (rss, swapents and nr_ptes
are unsigned long), with the result value assigned to points (an int) being in
the range 1..1000. So there could be an int overflow while computing
176 points *= 1000;
and points may end up negative, meaning the oom score for a mem-hog task
will be one.
196 if (points <= 0)
197 return 1;
For example:
[ 3366] 0 3366 35390480 24303939 5 0 0 oom01
Out of memory: Kill process 3366 (oom01) score 1 or sacrifice child
Here the oom01 process consumes more than 24303939 (rss) * 4096 ~= 92GB of physical
memory, but its oom score is one.
In this situation the mem-hog task is skipped and the oom killer kills another,
most probably innocent, task with an oom score greater than one.
The points variable should be of type long instead of int to prevent the
int overflow.
Signed-off-by: Frantisek Hrbata <[email protected]>
Acked-by: KOSAKI Motohiro <[email protected]>
Acked-by: Oleg Nesterov <[email protected]>
Acked-by: David Rientjes <[email protected]>
Signed-off-by: Andrew Morton <[email protected]>
Signed-off-by: Linus Torvalds <[email protected]>
Signed-off-by: Greg Kroah-Hartman <[email protected]> | 56c6a8a4aadca809e04276eabe5552935c51387f | linux | bigvul | 1 | null | null | null |
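A minimal userspace sketch of the overflow this record describes: with the figures from the report, the rss/swapents/nr_ptes sum multiplied by 1000 no longer fits in a 32-bit int, while a long accumulator holds it fine. It assumes an LP64 system (64-bit long) and is not the kernel code itself.

```c
#include <stdio.h>

int main(void)
{
    /* Figures taken from the report above. */
    unsigned long rss = 24303939, swapents = 0, nr_ptes = 5;

    long points = (long)(rss + swapents + nr_ptes) * 1000;
    printf("score input kept as long: %ld\n", points);
    /* Conversion to int is implementation-defined; on common ABIs it wraps
     * and becomes negative, which is what made the oom score collapse to 1. */
    printf("same value forced into int: %d\n", (int)points);
    return 0;
}
```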
jbd2: clear BH_Delay & BH_Unwritten in journal_unmap_buffer
journal_unmap_buffer()'s zap_buffer: code clears a lot of buffer head
state ala discard_buffer(), but does not touch _Delay or _Unwritten as
discard_buffer() does.
This can be problematic in some areas of the ext4 code which assume
that if they have found a buffer marked unwritten or delay, then it's
a live one. Perhaps those spots should check whether it is mapped
as well, but if jbd2 is going to tear down a buffer, let's really
tear it down completely.
Without this I get some fsx failures on sub-page-block filesystems
up until v3.2, at which point 4e96b2dbbf1d7e81f22047a50f862555a6cb87cb
and 189e868fa8fdca702eb9db9d8afc46b5cb9144c9 make the failures go
away, because buried within that large change is some more flag
clearing. I still think it's worth doing in jbd2, since
->invalidatepage leads here directly, and it's the right place
to clear away these flags.
Signed-off-by: Eric Sandeen <[email protected]>
Signed-off-by: "Theodore Ts'o" <[email protected]>
Cc: [email protected] | 15291164b22a357cb211b618adfef4fa82fc0de3 | linux | bigvul | 1 | null | null | null |
crypto: ghash - Avoid null pointer dereference if no key is set
The ghash_update function passes a pointer to gf128mul_4k_lle which will
be NULL if ghash_setkey is not called or if the most recent call to
ghash_setkey failed to allocate memory. This causes an oops. Fix this
up by returning an error code in the null case.
This is trivially triggered from unprivileged userspace through the
AF_ALG interface by simply writing to the socket without setting a key.
The ghash_final function has a similar issue, but triggering it requires
a memory allocation failure in ghash_setkey _after_ at least one
successful call to ghash_update.
BUG: unable to handle kernel NULL pointer dereference at 00000670
IP: [<d88c92d4>] gf128mul_4k_lle+0x23/0x60 [gf128mul]
*pde = 00000000
Oops: 0000 [#1] PREEMPT SMP
Modules linked in: ghash_generic gf128mul algif_hash af_alg nfs lockd nfs_acl sunrpc bridge ipv6 stp llc
Pid: 1502, comm: hashatron Tainted: G W 3.1.0-rc9-00085-ge9308cf #32 Bochs Bochs
EIP: 0060:[<d88c92d4>] EFLAGS: 00000202 CPU: 0
EIP is at gf128mul_4k_lle+0x23/0x60 [gf128mul]
EAX: d69db1f0 EBX: d6b8ddac ECX: 00000004 EDX: 00000000
ESI: 00000670 EDI: d6b8ddac EBP: d6b8ddc8 ESP: d6b8dda4
DS: 007b ES: 007b FS: 00d8 GS: 0033 SS: 0068
Process hashatron (pid: 1502, ti=d6b8c000 task=d6810000 task.ti=d6b8c000)
Stack:
00000000 d69db1f0 00000163 00000000 d6b8ddc8 c101a520 d69db1f0 d52aa000
00000ff0 d6b8dde8 d88d310f d6b8a3f8 d52aa000 00001000 d88d502c d6b8ddfc
00001000 d6b8ddf4 c11676ed d69db1e8 d6b8de24 c11679ad d52aa000 00000000
Call Trace:
[<c101a520>] ? kmap_atomic_prot+0x37/0xa6
[<d88d310f>] ghash_update+0x85/0xbe [ghash_generic]
[<c11676ed>] crypto_shash_update+0x18/0x1b
[<c11679ad>] shash_ahash_update+0x22/0x36
[<c11679cc>] shash_async_update+0xb/0xd
[<d88ce0ba>] hash_sendpage+0xba/0xf2 [algif_hash]
[<c121b24c>] kernel_sendpage+0x39/0x4e
[<d88ce000>] ? 0xd88cdfff
[<c121b298>] sock_sendpage+0x37/0x3e
[<c121b261>] ? kernel_sendpage+0x4e/0x4e
[<c10b4dbc>] pipe_to_sendpage+0x56/0x61
[<c10b4e1f>] splice_from_pipe_feed+0x58/0xcd
[<c10b4d66>] ? splice_from_pipe_begin+0x10/0x10
[<c10b51f5>] __splice_from_pipe+0x36/0x55
[<c10b4d66>] ? splice_from_pipe_begin+0x10/0x10
[<c10b6383>] splice_from_pipe+0x51/0x64
[<c10b63c2>] ? default_file_splice_write+0x2c/0x2c
[<c10b63d5>] generic_splice_sendpage+0x13/0x15
[<c10b4d66>] ? splice_from_pipe_begin+0x10/0x10
[<c10b527f>] do_splice_from+0x5d/0x67
[<c10b6865>] sys_splice+0x2bf/0x363
[<c129373b>] ? sysenter_exit+0xf/0x16
[<c104dc1e>] ? trace_hardirqs_on_caller+0x10e/0x13f
[<c129370c>] sysenter_do_call+0x12/0x32
Code: 83 c4 0c 5b 5e 5f c9 c3 55 b9 04 00 00 00 89 e5 57 8d 7d e4 56 53 8d 5d e4 83 ec 18 89 45 e0 89 55 dc 0f b6 70 0f c1 e6 04 01 d6 <f3> a5 be 0f 00 00 00 4e 89 d8 e8 48 ff ff ff 8b 45 e0 89 da 0f
EIP: [<d88c92d4>] gf128mul_4k_lle+0x23/0x60 [gf128mul] SS:ESP 0068:d6b8dda4
CR2: 0000000000000670
---[ end trace 4eaa2a86a8e2da24 ]---
note: hashatron[1502] exited with preempt_count 1
BUG: scheduling while atomic: hashatron/1502/0x10000002
INFO: lockdep is turned off.
[...]
Signed-off-by: Nick Bowler <[email protected]>
Cc: [email protected] [2.6.37+]
Signed-off-by: Herbert Xu <[email protected]> | 7ed47b7d142ec99ad6880bbbec51e9f12b3af74c | linux | bigvul | 1 | null | null | null |
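An illustrative sketch of the fix pattern in this record, not the ghash patch: the update path must fail cleanly with -ENOKEY when no key has been installed, instead of dereferencing a still-NULL key table. All names and sizes here are invented for the demo.

```c
#include <errno.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

#ifndef ENOKEY
#define ENOKEY 126
#endif

struct demo_ghash_ctx {
    unsigned char *key_table;    /* NULL until a key is installed */
};

static int demo_ghash_setkey(struct demo_ghash_ctx *ctx, const void *key, size_t len)
{
    ctx->key_table = malloc(4096);
    if (!ctx->key_table)
        return -ENOMEM;
    memset(ctx->key_table, 0, 4096);
    (void)key; (void)len;        /* real code would expand the key here */
    return 0;
}

static int demo_ghash_update(struct demo_ghash_ctx *ctx, const void *data, size_t len)
{
    if (!ctx->key_table)
        return -ENOKEY;          /* the missing check that caused the oops */
    (void)data; (void)len;
    return 0;
}

int main(void)
{
    struct demo_ghash_ctx ctx = { NULL };

    printf("update without key -> %d\n", demo_ghash_update(&ctx, "abc", 3));
    demo_ghash_setkey(&ctx, "0123456789abcdef", 16);
    printf("update with key    -> %d\n", demo_ghash_update(&ctx, "abc", 3));
    free(ctx.key_table);
    return 0;
}
```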
sysctl: restrict write access to dmesg_restrict
When dmesg_restrict is set to 1 CAP_SYS_ADMIN is needed to read the kernel
ring buffer. But a root user without CAP_SYS_ADMIN is able to reset
dmesg_restrict to 0.
This is an issue when, e.g., LXC (Linux Containers) are used and the complete
user space is running without CAP_SYS_ADMIN. An unprivileged and jailed
root user can bypass the dmesg_restrict protection.
With this patch writing to dmesg_restrict is only allowed when root has
CAP_SYS_ADMIN.
Signed-off-by: Richard Weinberger <[email protected]>
Acked-by: Dan Rosenberg <[email protected]>
Acked-by: Serge E. Hallyn <[email protected]>
Cc: Eric Paris <[email protected]>
Cc: Kees Cook <[email protected]>
Cc: James Morris <[email protected]>
Cc: Eugene Teo <[email protected]>
Cc: <[email protected]>
Signed-off-by: Andrew Morton <[email protected]>
Signed-off-by: Linus Torvalds <[email protected]> | bfdc0b497faa82a0ba2f9dddcf109231dd519fcc | linux | bigvul | 1 | null | null | null |
proc: fix oops on invalid /proc/<pid>/maps access
When m_start returns an error, the seq_file logic will still call m_stop
with that error entry, so we'd better make sure that we check it before
using it as a vma.
Introduced by commit ec6fd8a4355c ("report errors in /proc/*/*map*
sanely"), which replaced NULL with various ERR_PTR() cases.
(On ia64, you happen to get a unaligned fault instead of a page fault,
since the address used is generally some random error code like -EPERM)
Reported-by: Anca Emanuel <[email protected]>
Reported-by: Tony Luck <[email protected]>
Cc: Al Viro <[email protected]>
Cc: Américo Wang <[email protected]>
Cc: Stephen Wilson <[email protected]>
Signed-off-by: Linus Torvalds <[email protected]> | 76597cd31470fa130784c78fadb4dab2e624a723 | linux | bigvul | 1 | null | null | null |
cifs: always do is_path_accessible check in cifs_mount
Currently, we skip doing the is_path_accessible check in cifs_mount if
there is no prefixpath. I have a report of at least one server however
that allows a TREE_CONNECT to a share that has a DFS referral at its
root. The reporter in this case was using a UNC that had no prefixpath,
so the is_path_accessible check was not triggered and the box later hit
a BUG() because we were chasing a DFS referral on the root dentry for
the mount.
This patch fixes this by removing the check for a zero-length
prefixpath. That should make the is_path_accessible check be done in
this situation and should allow the client to chase the DFS referral at
mount time instead.
Cc: [email protected]
Reported-and-Tested-by: Yogesh Sharma <[email protected]>
Signed-off-by: Jeff Layton <[email protected]>
Signed-off-by: Steve French <[email protected]> | 70945643722ffeac779d2529a348f99567fa5c33 | linux | bigvul | 1 | null | null | null |
b43: allocate receive buffers big enough for max frame len + offset
Otherwise, skb_put inside of dma_rx can fail...
https://bugzilla.kernel.org/show_bug.cgi?id=32042
Signed-off-by: John W. Linville <[email protected]>
Acked-by: Larry Finger <[email protected]>
Cc: [email protected] | c85ce65ecac078ab1a1835c87c4a6319cf74660a | linux | bigvul | 1 | null | null | null |
fuse: check size of FUSE_NOTIFY_INVAL_ENTRY message
FUSE_NOTIFY_INVAL_ENTRY didn't check the length of the write, so the
message processing could overrun and result in a "kernel BUG at
fs/fuse/dev.c:629!"
Reported-by: Han-Wen Nienhuys <[email protected]>
Signed-off-by: Miklos Szeredi <[email protected]>
CC: [email protected] | c2183d1e9b3f313dd8ba2b1b0197c8d9fb86a7ae | linux | bigvul | 1 | null | null | null |
remove div_long_long_rem
x86 is the only arch right now which provides an optimized version of
div_long_long_rem, and it has the downside that one has to be very careful that
the divide doesn't overflow.
The API is a little awkward, as the arguments for the unsigned divide are
signed. The signed version also doesn't handle a negative divisor and
produces worse code on 64bit archs.
There is little incentive to keep this API alive, so this converts the few
users to the new API.
Signed-off-by: Roman Zippel <[email protected]>
Cc: Ralf Baechle <[email protected]>
Cc: Ingo Molnar <[email protected]>
Cc: Thomas Gleixner <[email protected]>
Cc: john stultz <[email protected]>
Cc: Christoph Lameter <[email protected]>
Signed-off-by: Andrew Morton <[email protected]>
Signed-off-by: Linus Torvalds <[email protected]> | f8bd2258e2d520dff28c855658bd24bdafb5102d | linux | bigvul | 1 | null | null | null |
cifs: fix possible memory corruption in CIFSFindNext
The name_len variable in CIFSFindNext is a signed int that gets set to
the resume_name_len in the cifs_search_info. The resume_name_len however
is unsigned and for some infolevels is populated directly from a 32 bit
value sent by the server.
If the server sends a very large value for this, then that value could
look negative when converted to a signed int. That would make that
value pass the PATH_MAX check later in CIFSFindNext. The name_len would
then be used as a length value for a memcpy. It would then be treated
as unsigned again, and the memcpy scribbles over a ton of memory.
Fix this by making the name_len an unsigned value in CIFSFindNext.
Cc: <[email protected]>
Reported-by: Darren Lavender <[email protected]>
Signed-off-by: Jeff Layton <[email protected]>
Signed-off-by: Steve French <[email protected]> | 9438fabb73eb48055b58b89fc51e0bc4db22fabd | linux | bigvul | 1 | null | null | null |
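A userspace sketch of the signed/unsigned mix-up described above, not the CIFS patch: a 32-bit length from the wire looks negative when stored in a signed int, slips past a "len > PATH_MAX" check, and would then become a huge size_t in memcpy. The function and constant names are made up for the demo; the dangerous memcpy is left commented out.

```c
#include <stdint.h>
#include <stdio.h>
#include <string.h>

#define MY_PATH_MAX 4096

static int copy_name_buggy(char *dst, const char *src, uint32_t wire_len)
{
    int name_len = (int)wire_len;            /* 0xffffff00 becomes -256 */

    (void)dst; (void)src;
    if (name_len > MY_PATH_MAX)              /* check passes: -256 < 4096 */
        return -1;
    /* memcpy(dst, src, name_len);              size_t would see ~SIZE_MAX - 255 */
    return name_len;
}

static int copy_name_fixed(char *dst, const char *src, uint32_t wire_len)
{
    if (wire_len > MY_PATH_MAX)              /* unsigned compare rejects it */
        return -1;
    memcpy(dst, src, wire_len);
    return (int)wire_len;
}

int main(void)
{
    char dst[MY_PATH_MAX], src[16] = "name";

    printf("buggy check result: %d\n", copy_name_buggy(dst, src, 0xffffff00u));
    printf("fixed check result: %d\n", copy_name_fixed(dst, src, 0xffffff00u));
    printf("fixed, sane length: %d\n", copy_name_fixed(dst, src, 4));
    return 0;
}
```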
net: Compute protocol sequence numbers and fragment IDs using MD5.
Computers have become a lot faster since we compromised on the
partial MD4 hash which we use currently for performance reasons.
MD5 is a much safer choice, and is in line with both RFC1948 and
other ISS generators (OpenBSD, Solaris, etc.)
Furthermore, only having 24-bits of the sequence number be truly
unpredictable is a very serious limitation. So the periodic
regeneration and 8-bit counter have been removed. We compute and
use a full 32-bit sequence number.
For ipv6, DCCP was found to use a 32-bit truncated initial sequence
number (it needs 43-bits) and that is fixed here as well.
Reported-by: Dan Kaminsky <[email protected]>
Tested-by: Willy Tarreau <[email protected]>
Signed-off-by: David S. Miller <[email protected]> | 6e5714eaf77d79ae1c8b47e3e040ff5411b717ec | linux | bigvul | 1 | null | null | null |
perf: Remove the nmi parameter from the swevent and overflow interface
The nmi parameter indicated if we could do wakeups from the current
context, if not, we would set some state and self-IPI and let the
resulting interrupt do the wakeup.
For the various event classes:
- hardware: nmi=0; PMI is in fact an NMI or we run irq_work_run from
the PMI-tail (ARM etc.)
- tracepoint: nmi=0; since tracepoint could be from NMI context.
- software: nmi=[0,1]; some, like the schedule thing cannot
perform wakeups, and hence need 0.
As one can see, there is very little nmi=1 usage, and the down-side of
not using it is that on some platforms some software events can have a
jiffy delay in wakeup (when arch_irq_work_raise isn't implemented).
The up-side however is that we can remove the nmi parameter and save a
bunch of conditionals in fast paths.
Signed-off-by: Peter Zijlstra <[email protected]>
Cc: Michael Cree <[email protected]>
Cc: Will Deacon <[email protected]>
Cc: Deng-Cheng Zhu <[email protected]>
Cc: Anton Blanchard <[email protected]>
Cc: Eric B Munson <[email protected]>
Cc: Heiko Carstens <[email protected]>
Cc: Paul Mundt <[email protected]>
Cc: David S. Miller <[email protected]>
Cc: Frederic Weisbecker <[email protected]>
Cc: Jason Wessel <[email protected]>
Cc: Don Zickus <[email protected]>
Link: http://lkml.kernel.org/n/[email protected]
Signed-off-by: Ingo Molnar <[email protected]> | a8b0ca17b80e92faab46ee7179ba9e99ccb61233 | linux | bigvul | 1 | null | null | null |
[SCSI] pmcraid: reject negative request size
There's a code path in pmcraid that can be reached via device ioctl that
causes all sorts of ugliness, including heap corruption or triggering the
OOM killer due to consecutive allocation of large numbers of pages.
First, the user can call pmcraid_chr_ioctl(), with a type
PMCRAID_PASSTHROUGH_IOCTL. This calls through to
pmcraid_ioctl_passthrough(). Next, a pmcraid_passthrough_ioctl_buffer
is copied in, and the request_size variable is set to
buffer->ioarcb.data_transfer_length, which is an arbitrary 32-bit
signed value provided by the user. If a negative value is provided
here, bad things can happen. For example,
pmcraid_build_passthrough_ioadls() is called with this request_size,
which immediately calls pmcraid_alloc_sglist() with a negative size.
The resulting math on allocating a scatter list can result in an
overflow in the kzalloc() call (if num_elem is 0, the sglist will be
smaller than expected), or if num_elem is unexpectedly large the
subsequent loop will call alloc_pages() repeatedly, a high number of
pages will be allocated and the OOM killer might be invoked.
It looks like preventing this value from being negative in
pmcraid_ioctl_passthrough() would be sufficient.
Signed-off-by: Dan Rosenberg <[email protected]>
Cc: <[email protected]>
Signed-off-by: Andrew Morton <[email protected]>
Signed-off-by: James Bottomley <[email protected]> | b5b515445f4f5a905c5dd27e6e682868ccd6c09d | linux | bigvul | 1 | null | null | null |
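An illustrative sketch of the check this record calls for, not the pmcraid patch: a 32-bit signed size copied from userspace must be rejected when negative (or absurdly large) before it drives buffer and scatter-list allocations. The cap value is an assumption for the demo.

```c
#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>

#define REQUEST_SIZE_MAX (1 << 20)      /* hypothetical 1 MiB cap */

static void *alloc_request_buffer(int32_t request_size)
{
    /* Reject before any size math: a negative value would otherwise turn
     * into an enormous unsigned allocation size further down. */
    if (request_size < 0 || request_size > REQUEST_SIZE_MAX)
        return NULL;
    return malloc((size_t)request_size);
}

int main(void)
{
    void *sane = alloc_request_buffer(512);

    printf("negative size -> %p\n", alloc_request_buffer(-4096));
    printf("sane size     -> %p\n", sane);
    free(sane);
    return 0;
}
```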
af_packet: prevent information leak
In 2.6.27, commit 393e52e33c6c2 (packet: deliver VLAN TCI to userspace)
added a small information leak.
Add a padding field and make sure it is zeroed before copying to user space.
Signed-off-by: Eric Dumazet <[email protected]>
CC: Patrick McHardy <[email protected]>
Signed-off-by: David S. Miller <[email protected]> | 13fcb7bd322164c67926ffe272846d4860196dc6 | linux | bigvul | 1 | null | null | null |
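A small sketch of the leak pattern described above, in userspace rather than the af_packet code: a struct handed to another protection domain can carry uninitialized padding bytes unless the whole object is zeroed (or the padding made explicit and cleared) first. The struct name and fields are illustrative.

```c
#include <stdint.h>
#include <stdio.h>
#include <string.h>

struct tpacket_aux_demo {
    uint32_t status;
    uint16_t vlan_tci;
    uint16_t padding;            /* explicit padding, must be cleared */
};

int main(void)
{
    struct tpacket_aux_demo aux;

    memset(&aux, 0, sizeof(aux));   /* zero everything, padding included */
    aux.status = 1;
    aux.vlan_tci = 42;

    /* In the kernel this buffer would now be copied to userspace; no stack
     * garbage can leak because every byte was initialized. */
    printf("%u %u %u\n", aux.status, aux.vlan_tci, aux.padding);
    return 0;
}
```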
xtensa: prevent arbitrary read in ptrace
Prevent an arbitrary kernel read. Check the user pointer with access_ok()
before copying data in.
[[email protected]: s/EIO/EFAULT/]
Signed-off-by: Dan Rosenberg <[email protected]>
Cc: Christian Zankel <[email protected]>
Cc: Oleg Nesterov <[email protected]>
Cc: <[email protected]>
Signed-off-by: Andrew Morton <[email protected]>
Signed-off-by: Linus Torvalds <[email protected]> | 0d0138ebe24b94065580bd2601f8bb7eb6152f56 | linux | bigvul | 1 | null | null | null |
ipv6: make fragment identifications less predictable
IPv6 fragment identification generation is way beyond what we use for
IPv4: it uses a single generator. It's not scalable and allows DoS
attacks.
Now that inetpeer is IPv6 aware, we can use it to provide a more secure and
scalable frag ident generator (per destination, instead of system wide)
This patch :
1) defines a new secure_ipv6_id() helper
2) extends inet_getid() to provide 32bit results
3) extends ipv6_select_ident() with a new dest parameter
Reported-by: Fernando Gont <[email protected]>
Signed-off-by: Eric Dumazet <[email protected]>
Signed-off-by: David S. Miller <[email protected]> | 87c48fa3b4630905f98268dde838ee43626a060c | linux | bigvul | 1 | null | null | null |
perf, x86: Fix Intel fixed counters base initialization
The following patch solves the problems introduced by Robert's
commit 41bf498 and reported by Arun Sharma. This commit gets rid
of the base + index notation for reading and writing PMU msrs.
The problem is that for fixed counters, the new calculation for
the base did not take into account the fixed counter indexes,
thus all fixed counters were read/written from fixed counter 0.
Although all fixed counters share the same config MSR, they each
have their own counter register.
Without:
$ task -e unhalted_core_cycles -e instructions_retired -e baclears noploop 1 noploop for 1 seconds
242202299 unhalted_core_cycles (0.00% scaling, ena=1000790892, run=1000790892)
2389685946 instructions_retired (0.00% scaling, ena=1000790892, run=1000790892)
49473 baclears (0.00% scaling, ena=1000790892, run=1000790892)
With:
$ task -e unhalted_core_cycles -e instructions_retired -e baclears noploop 1 noploop for 1 seconds
2392703238 unhalted_core_cycles (0.00% scaling, ena=1000840809, run=1000840809)
2389793744 instructions_retired (0.00% scaling, ena=1000840809, run=1000840809)
47863 baclears (0.00% scaling, ena=1000840809, run=1000840809)
Signed-off-by: Stephane Eranian <[email protected]>
Cc: [email protected]
Cc: [email protected]
Cc: [email protected]
Cc: [email protected]
Cc: [email protected]
LKML-Reference: <20110319172005.GB4978@quad>
Signed-off-by: Ingo Molnar <[email protected]> | fc66c5210ec2539e800e87d7b3a985323c7be96e | linux | bigvul | 1 | null | null | null |
TOMOYO: Fix oops in tomoyo_mount_acl().
In tomoyo_mount_acl() since 2.6.36, kern_path() was called without checking
that dev_name != NULL. As a result, an unprivileged user can trigger an oops by
issuing a mount(NULL, "/", "ext3", 0, NULL) request.
Fix this by checking dev_name != NULL before calling kern_path(dev_name).
Signed-off-by: Tetsuo Handa <[email protected]>
Cc: [email protected]
Signed-off-by: James Morris <[email protected]> | 4e78c724d47e2342aa8fde61f6b8536f662f795f | linux | bigvul | 1 | null | null | null |
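An illustrative sketch of the NULL check this record adds, not the TOMOYO code: a mount-style handler must verify that the device-name argument is non-NULL before resolving it, since userspace is allowed to pass NULL for some filesystem types. The function shape is invented for the demo.

```c
#include <errno.h>
#include <stdio.h>
#include <string.h>

static int check_mount(const char *dev_name, const char *dir, const char *type)
{
    if (!dev_name || !dir || !type)
        return -EINVAL;            /* refuse instead of dereferencing NULL */
    if (strlen(dev_name) == 0)
        return -EINVAL;
    printf("would resolve device path: %s\n", dev_name);
    return 0;
}

int main(void)
{
    printf("NULL dev_name -> %d\n", check_mount(NULL, "/", "ext3"));
    printf("valid args    -> %d\n", check_mount("/dev/sda1", "/mnt", "ext3"));
    return 0;
}
```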
nl80211: fix check for valid SSID size in scan operations
In both trigger_scan and sched_scan operations, we were checking the
SSID length before the value had been assigned correctly. Since the
memory was just kzalloc'ed, the check always failed and SSIDs with
more than 32 characters were allowed to go through.
This was causing a buffer overflow when copying the actual SSID to the
proper place.
This bug has been there since 2.6.29-rc4.
Cc: [email protected]
Signed-off-by: Luciano Coelho <[email protected]>
Signed-off-by: John W. Linville <[email protected]> | 208c72f4fe44fe09577e7975ba0e7fa0278f3d03 | linux | bigvul | 1 | null | null | null |
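An illustrative sketch of the validation-order issue, not the nl80211 patch: the length of the attribute actually received must be checked before copying into the fixed 32-byte field; checking a still-zeroed destination length first lets oversized SSIDs through. Struct and function names are assumptions for the demo.

```c
#include <stdio.h>
#include <string.h>

#define IEEE80211_MAX_SSID_LEN 32

struct scan_ssid {
    unsigned char ssid[IEEE80211_MAX_SSID_LEN];
    unsigned char ssid_len;
};

static int set_scan_ssid(struct scan_ssid *req, const void *attr, size_t attr_len)
{
    /* Check the incoming attribute length, not the (still zero) req->ssid_len. */
    if (attr_len > IEEE80211_MAX_SSID_LEN)
        return -1;
    memcpy(req->ssid, attr, attr_len);
    req->ssid_len = (unsigned char)attr_len;
    return 0;
}

int main(void)
{
    struct scan_ssid req = { { 0 }, 0 };
    char long_ssid[40];

    memset(long_ssid, 'A', sizeof(long_ssid));
    printf("40-byte SSID -> %d\n", set_scan_ssid(&req, long_ssid, sizeof(long_ssid)));
    printf("5-byte SSID  -> %d\n", set_scan_ssid(&req, "guest", 5));
    return 0;
}
```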
mm: avoid wrapping vm_pgoff in mremap()
The normal mmap paths all avoid creating a mapping where the pgoff
inside the mapping could wrap around due to overflow. However, an
expanding mremap() can take such a non-wrapping mapping and make it
bigger and cause a wrapping condition.
Noticed by Robert Swiecki when running a system call fuzzer, where it
caused a BUG_ON() due to terminally confusing the vma_prio_tree code. A
vma dumping patch by Hugh then pinpointed the crazy wrapped case.
Reported-and-tested-by: Robert Swiecki <[email protected]>
Acked-by: Hugh Dickins <[email protected]>
Cc: [email protected]
Signed-off-by: Linus Torvalds <[email protected]> | 982134ba62618c2d69fbbbd166d0a11ee3b7e3d8 | linux | bigvul | 1 | null | null | null |
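A minimal sketch of the wrap condition described above, not the mremap code: when growing a mapping, the page offset of its last page must still be representable, i.e. pgoff plus the number of pages must not wrap around. The page-shift constant is illustrative.

```c
#include <stdio.h>

#define DEMO_PAGE_SHIFT 12

static int pgoff_would_wrap(unsigned long pgoff, unsigned long new_len_bytes)
{
    unsigned long pages = new_len_bytes >> DEMO_PAGE_SHIFT;

    return pgoff + pages < pgoff;          /* unsigned wraparound detection */
}

int main(void)
{
    unsigned long pgoff = ~0ul - 10;       /* offset very close to the top */

    printf("grow by 1 MiB wraps?  %d\n", pgoff_would_wrap(pgoff, 1ul << 20));
    printf("small mapping wraps?  %d\n", pgoff_would_wrap(100, 1ul << 20));
    return 0;
}
```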
proc: restrict access to /proc/PID/io
/proc/PID/io may be used for gathering private information. E.g. for
openssh and vsftpd daemons wchars/rchars may be used to learn the
precise password length. Restrict it to processes that are able to ptrace
the target process.
ptrace_may_access() is needed to prevent keeping open file descriptor of
"io" file, executing setuid binary and gathering io information of the
setuid'ed process.
Signed-off-by: Vasiliy Kulikov <[email protected]>
Signed-off-by: Linus Torvalds <[email protected]> | 1d1221f375c94ef961ba8574ac4f85c8870ddd51 | linux | bigvul | 1 | null | null | null |