Commit Graph

160 Commits

Author SHA1 Message Date
Richard Henderson
85aa80813d tcg: Support arbitrary size + alignment
Previously we allowed fully unaligned operations, but not operations
that are aligned, but with an alignment smaller than the operation size.

In addition, arm32, ia64, mips, and sparc had been omitted from the
previous overalignment patch, which would have led to that alignment
being enforced.

Signed-off-by: Richard Henderson <rth@twiddle.net>
2016-09-16 08:12:06 -07:00
Markus Armbruster
14e54f8ecf tcg: Clean up tcg-target.h header guards
These use guard symbols like TCG_TARGET_$target.
scripts/clean-header-guards.pl doesn't like them because they don't
match their file name (they should, to make guard collisions less
likely).

Clean them up: use guard symbol $target_TCG_TARGET_H for
tcg/$target/tcg-target.h.

Signed-off-by: Markus Armbruster <armbru@redhat.com>
Reviewed-by: Richard Henderson <rth@twiddle.net>
2016-07-12 16:19:16 +02:00
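
For illustration, the resulting guard for tcg/ppc/tcg-target.h under this
scheme (body elided):

| /* tcg/ppc/tcg-target.h */
| #ifndef PPC_TCG_TARGET_H
| #define PPC_TCG_TARGET_H
| /* ... target definitions ... */
| #endif /* PPC_TCG_TARGET_H */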
Sergey Sorokin
1f00b27f17 tcg: Improve the alignment check infrastructure
Some architectures (e.g. ARMv8) need addresses aligned to a size
larger than the size of the memory access itself.
The current zero-cost alignment check implementation in QEMU is enough
to support such a check, but we need a way to specify the required
alignment size.

Signed-off-by: Sergey Sorokin <afarallax@yandex.ru>
Message-Id: <1466705806-679898-1-git-send-email-afarallax@yandex.ru>
Signed-off-by: Richard Henderson <rth@twiddle.net>
[rth: Assert in tcg_canonicalize_memop.  Leave get_alignment_bits
available for, though unused by, user-mode.  Retain logging difference
based on ALIGNED_ONLY.]
2016-07-05 20:50:13 -07:00
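
A simplified sketch of the check helper this introduces, assuming the
alignment size is encoded as a log2 value in dedicated memop bits (names
follow the patch; details abridged):

| static inline unsigned get_alignment_bits(TCGMemOp memop)
| {
|     unsigned a = memop & MO_AMASK;
|
|     if (a == MO_UNALN) {
|         return 0;                   /* no alignment required */
|     } else if (a == MO_ALIGN) {
|         return memop & MO_SIZE;     /* natural alignment: same as access size */
|     }
|     return a >> MO_ASHIFT;          /* explicit log2 alignment size */
| }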
Richard Henderson
59d7c14eef tcg: Optimize spills of constants
While we can store constants via constraints on INDEX_op_st_i32 et al,
we weren't able to spill constants to backing store.

Add a new backend interface, tcg_out_sti, which may store the constant
(and is allowed to fail).  Rearrange the temp_* helpers so that we only
attempt to directly store a constant when the temp is becoming dead/free.

Signed-off-by: Richard Henderson <rth@twiddle.net>
2016-07-05 20:50:13 -07:00
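
A sketch of the new hook's shape (signature as described above; the
failure contract is the key point):

| /* Try to store constant VAL directly to BASE+OFS.  Returns false if
|    the backend cannot encode this store, in which case the caller
|    materializes the constant in a register and spills that instead. */
| static bool tcg_out_sti(TCGContext *s, TCGType type, TCGArg val,
|                         TCGReg base, intptr_t ofs);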
Sergey Fedorov
f309101c26 tcg: Clean up direct block chaining data fields
Briefly describe in a comment how direct block chaining is done. It
should help in understanding the data fields that follow.

Rename some fields in TranslationBlock and TCGContext structures to
better reflect their purpose (dropping excessive 'tb_' prefix in
TranslationBlock but keeping it in TCGContext):
   tb_next_offset  =>  jmp_reset_offset
   tb_jmp_offset   =>  jmp_insn_offset
   tb_next         =>  jmp_target_addr
   jmp_next        =>  jmp_list_next
   jmp_first       =>  jmp_list_first

Avoid using a magic constant as the invalid offset used to indicate
that no n-th jump has been generated.

Signed-off-by: Sergey Fedorov <serge.fdrv@gmail.com>
Signed-off-by: Sergey Fedorov <sergey.fedorov@linaro.org>
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Signed-off-by: Richard Henderson <rth@twiddle.net>
2016-05-12 14:06:41 -10:00
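
A sketch of where the renamed fields end up (types and grouping
illustrative, not copied from the patch):

| struct TranslationBlock {
|     /* ... */
|     uint16_t jmp_reset_offset[2];   /* was tb_next_offset */
|     uint16_t jmp_insn_offset[2];    /* was tb_jmp_offset  */
|     uintptr_t jmp_target_addr[2];   /* was tb_next        */
|     uintptr_t jmp_list_next[2];     /* was jmp_next       */
|     uintptr_t jmp_list_first;       /* was jmp_first      */
| };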
Sergey Fedorov
399f164857 tcg/ppc: Make direct jump patching thread-safe
Ensure direct jump patching in PPC is atomic by:
 * limiting translation buffer size in 32-bit mode to be addressable by
   Branch I-form instruction;
 * using atomic_read()/atomic_set() for code patching.

Signed-off-by: Sergey Fedorov <serge.fdrv@gmail.com>
Signed-off-by: Sergey Fedorov <sergey.fedorov@linaro.org>
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Message-Id: <1461341333-19646-5-git-send-email-sergey.fedorov@linaro.org>
Signed-off-by: Richard Henderson <rth@twiddle.net>
2016-05-12 14:06:40 -10:00
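
A sketch of the patching step, assuming QEMU's atomic_set() and
flush_icache_range() helpers (function name hypothetical):

| /* Replace one Branch I-form instruction word atomically, then flush
|    the icache so other threads fetch a consistent instruction. */
| static void patch_jmp_insn(uintptr_t jmp_addr, uint32_t new_insn)
| {
|     atomic_set((uint32_t *)jmp_addr, new_insn);
|     flush_icache_range(jmp_addr, jmp_addr + 4);
| }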
Aurelien Jarno
8d8fdbae01 tcg: check for CONFIG_DEBUG_TCG instead of NDEBUG
Check for CONFIG_DEBUG_TCG instead of NDEBUG, drop now useless code.

Cc: Richard Henderson <rth@twiddle.net>
Signed-off-by: Aurelien Jarno <aurelien@aurel32.net>
Message-id: 1461228530-14852-2-git-send-email-aurelien@aurel32.net
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2016-04-21 15:43:20 +01:00
Aurelien Jarno
eabb7b91b3 tcg: use tcg_debug_assert instead of assert (fix performance regression)
The TCG code is quite performance sensitive, but at the same time can
also be quite tricky. That is why it has asserts that can be enabled
with the --enable-debug-tcg configure option.

This used to work the following way:

| #include "config.h"
|
| ...
|
| #if !defined(CONFIG_DEBUG_TCG) && !defined(NDEBUG)
| /* define it to suppress various consistency checks (faster) */
| #define NDEBUG
| #endif
|
| ...
|
| #include <assert.h>

Since commit 757e725b (tcg: Clean up includes) "config.h" has been
replaced by "qemu/osdep.h" which itself includes <assert.h>. As a
consequence the assertions are always enabled, even when using
--disable-debug-tcg, causing a performance regression, especially on
targets with many registers. For instance on qemu-system-ppc the
speed difference is about 15%.

tcg_debug_assert is controlled directly by CONFIG_DEBUG_TCG and is
already used in some places. This patch replaces all calls to assert
with calls to tcg_debug_assert.

Cc: Peter Maydell <peter.maydell@linaro.org>
Cc: Richard Henderson <rth@twiddle.net>
Signed-off-by: Aurelien Jarno <aurelien@aurel32.net>
Message-id: 1461228530-14852-1-git-send-email-aurelien@aurel32.net
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2016-04-21 15:41:47 +01:00
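
For context, a simplified sketch of how tcg_debug_assert can stay free in
non-debug builds while still aiding the optimizer:

| #ifdef CONFIG_DEBUG_TCG
| # define tcg_debug_assert(X) do { assert(X); } while (0)
| #else
| /* No runtime check, but the compiler may still optimize on X. */
| # define tcg_debug_assert(X) \
|     do { if (!(X)) { __builtin_unreachable(); } } while (0)
| #endif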
Peter Maydell
c3b7f66800 tcg: Remove unnecessary osdep.h includes from tcg-target.inc.c
Commit 757e725b58 added a number of #include "qemu/osdep.h"
files to the tcg-target.c files (as they were named at the time).
These are unnecessary because these files are not standalone C
files, and the tcg/tcg.c file which includes them will have
already included osdep.h on their behalf. Remove the unneeded
include directives.

Reviewed-by: Eric Blake <eblake@redhat.com>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Message-Id: <1456238983-10160-4-git-send-email-peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <rth@twiddle.net>
2016-02-23 08:31:03 -08:00
Peter Maydell
ce15110981 tcg: Rename tcg-target.c to tcg-target.inc.c
Rename the per-architecture tcg-target.c files to tcg-target.inc.c.
This makes it clearer that they are not intended to be standalone
C files, but are instead #included into another source file.

Reviewed-by: Eric Blake <eblake@redhat.com>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Message-Id: <1456238983-10160-2-git-send-email-peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <rth@twiddle.net>
2016-02-23 08:30:38 -08:00
Peter Maydell
757e725b58 tcg: Clean up includes
Clean up includes so that osdep.h is included first and headers
which it implies are not included manually.

This commit was created with scripts/clean-includes.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Message-id: 1453832250-766-16-git-send-email-peter.maydell@linaro.org
2016-01-29 15:07:23 +00:00
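
The resulting convention, illustrated (file names arbitrary):

| #include "qemu/osdep.h"   /* always first */
| #include "tcg.h"          /* no manual <stdint.h>, <assert.h>, etc. --
|                              osdep.h already provides them */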
Richard Henderson
1e1df962e3 tcg/ppc: Prefer mask over andi.
Prefer the instruction that isn't required to modify cr0.

Signed-off-by: Richard Henderson <rth@twiddle.net>
2015-10-19 11:04:38 -10:00
Richard Henderson
5bfd75a35c tcg/ppc: Revise goto_tb implementation
Restrict the size of code_gen_buffer to 2GB on ppc64, which
lets us assert that everything is reachable with addis+addi
from tb_ret_addr.  This lets us use a max of 4 insns for goto_tb
instead of 7.

Emit the indirect branch portion of goto_tb up front, which
means we only have to update two insns to update any link.
With a 64-bit store, we can update the link atomically, which
may be required in future.

Signed-off-by: Richard Henderson <rth@twiddle.net>
2015-10-19 11:04:37 -10:00
Richard Henderson
70f897bdc4 tcg/ppc: Adjust exit_tb for change in prologue placement
Changing the prologue to the beginning of the code_gen_buffer
changes the direction of the "return" branch.  Need to change
the logic to match.

Signed-off-by: Richard Henderson <rth@twiddle.net>
2015-10-19 11:04:37 -10:00
Laurent Vivier
b76f21a707 linux-user: remove useless macros GUEST_BASE and RESERVED_VA
As we have removed CONFIG_USE_GUEST_BASE, we always use a guest base,
and the macros GUEST_BASE and RESERVED_VA become useless: replace
them with their values.

Reviewed-by: Alexander Graf <agraf@suse.de>
Signed-off-by: Laurent Vivier <laurent@vivier.eu>
Message-Id: <1440420834-8388-1-git-send-email-laurent@vivier.eu>
Signed-off-by: Richard Henderson <rth@twiddle.net>
2015-08-24 11:14:30 -07:00
Laurent Vivier
4cbea59869 linux-user: remove --enable-guest-base/--disable-guest-base
All tcg host architectures now support the guest base, and as
there is no real performance loss, it can always be enabled.

In any case, guest base use can effectively be disabled at runtime by
setting the guest base to 0.

CONFIG_USE_GUEST_BASE is defined as (USE_GUEST_BASE && USER_ONLY);
it should simply be replaced by CONFIG_USER_ONLY, but as some other
parts already use !CONFIG_SOFTMMU, I have chosen to use !CONFIG_SOFTMMU
instead.

Reviewed-by: Alexander Graf <agraf@suse.de>
Signed-off-by: Laurent Vivier <laurent@vivier.eu>
Message-Id: <1440373328-9788-2-git-send-email-laurent@vivier.eu>
Signed-off-by: Richard Henderson <rth@twiddle.net>
2015-08-24 11:14:17 -07:00
Benjamin Herrenschmidt
68d45bb61c tcg/ppc: Improve unaligned load/store handling on 64-bit backend
Currently, we get to the slow path for any unaligned access in the
backend, because we effectively preserve the bottom address bits
below the alignment requirement when comparing with the TLB entry,
so any non-0 bit there will cause the compare to fail.

For the same number of instructions, we can instead add the access
size - 1 to the address and stick to clearing all the bottom bits.

That means that normal unaligned accesses will not fall back (the HW
will handle them fine). Only when crossing a page boundary will we
end up with a mismatch, because we'll then be pointing into the next
page, which cannot possibly be in the same TLB entry.

Reviewed-by: Aurelien Jarno <aurelien@aurel32.net>
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Message-Id: <1437455978.5809.2.camel@kernel.crashing.org>
Signed-off-by: Richard Henderson <rth@twiddle.net>
2015-08-24 11:10:54 -07:00
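
The address arithmetic, sketched in C (helper name hypothetical;
TARGET_PAGE_MASK clears the in-page bits):

| /* Old scheme: keep low bits below the alignment, so any set low bit
|    fails the TLB compare.  New scheme: add (size - 1), then clear all
|    in-page bits.  An unaligned access contained in one page still
|    matches its TLB entry; one that crosses into the next page cannot,
|    and correctly takes the slow path. */
| static target_ulong tlb_compare_addr(target_ulong addr, int size)
| {
|     return (addr + size - 1) & TARGET_PAGE_MASK;
| }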
Richard Henderson
609ad70562 tcg: Split trunc_shr_i32 opcode into extr[lh]_i64_i32
Rather than allow arbitrary shift+trunc, only concern ourselves
with low and high parts.  This is all that was being used anyway.

Signed-off-by: Richard Henderson <rth@twiddle.net>
2015-08-24 11:10:54 -07:00
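
Illustrative front-end usage of the two resulting ops:

| tcg_gen_extrl_i64_i32(lo32, val64);   /* extract low 32 bits  */
| tcg_gen_extrh_i64_i32(hi32, val64);   /* extract high 32 bits */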
Aurelien Jarno
4f2331e5b6 tcg: implement real ext_i32_i64 and extu_i32_i64 ops
Implement real ext_i32_i64 and extu_i32_i64 ops. They ensure that a
32-bit value is always converted to a 64-bit value and not propagated
through the register allocator or the optimizer.

Cc: Andrzej Zaborowski <balrogg@gmail.com>
Cc: Alexander Graf <agraf@suse.de>
Cc: Blue Swirl <blauwirbel@gmail.com>
Cc: Stefan Weil <sw@weilnetz.de>
Acked-by: Claudio Fontana <claudio.fontana@huawei.com>
Signed-off-by: Aurelien Jarno <aurelien@aurel32.net>
Signed-off-by: Richard Henderson <rth@twiddle.net>
2015-08-24 11:10:54 -07:00
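
Illustrative usage, making the size change explicit at the opcode level:

| tcg_gen_ext_i32_i64(dst64, src32);    /* sign-extend 32 -> 64 */
| tcg_gen_extu_i32_i64(dst64, src32);   /* zero-extend 32 -> 64 */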
Aurelien Jarno
0632e555fc tcg: rename trunc_shr_i32 into trunc_shr_i64_i32
The op is sometimes named trunc_shr_i32 and sometimes trunc_shr_i64_i32,
and the name in the README doesn't match the name offered to the
frontends.

Always use the long name to make it clear it is a size changing op.

Signed-off-by: Aurelien Jarno <aurelien@aurel32.net>
Signed-off-by: Richard Henderson <rth@twiddle.net>
2015-08-24 11:10:54 -07:00
Richard Henderson
2b7ec66f02 tcg: Mask TCGMemOp appropriately for indexing
The addition of MO_AMASK means that places that used inverted masks
need to be changed to use positive masks, and places that failed to
mask the intended bits need updating.

Reviewed-by: Yongbok Kim <yongbok.kim@imgtec.com>
Tested-by: Yongbok Kim <yongbok.kim@imgtec.com>
Signed-off-by: Richard Henderson <rth@twiddle.net>
2015-06-09 06:35:29 -07:00
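
An illustrative positive-mask lookup (table name as in the ppc backend;
the exact mask varies per use):

| /* Keep only the bits that index the opcode table, so the new
|    MO_AMASK alignment bits cannot leak into the index. */
| uint32_t insn = qemu_ldx_opc[opc & (MO_BSWAP | MO_SSIZE)];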
Paolo Bonzini
006f8638c6 tcg: add TCG_TARGET_TLB_DISPLACEMENT_BITS
This will be used to size the TLB when more than 8 MMU modes are
used by the target.  Limitations come from the limited size of
the immediate fields (which sometimes, as in the case of AArch64,
extend to instructions that shift the immediate).

Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Message-Id: <1424436345-37924-2-git-send-email-pbonzini@redhat.com>
Reviewed-by: Richard Henderson <rth@twiddle.net>
Signed-off-by: Alexander Graf <agraf@suse.de>
2015-06-03 23:56:56 +02:00
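
Each backend advertises the reach of its addressing; an illustrative
definition (16 being a plausible value for a backend with 16-bit offset
fields):

| /* tcg-target.h: TLB displacements must fit in this many bits. */
| #define TCG_TARGET_TLB_DISPLACEMENT_BITS 16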
Richard Henderson
3972ef6f83 tcg: Push merged memop+mmu_idx parameter to softmmu routines
The extra information is not yet used but it is now available.
This requires minor changes through all of the tcg backends.

Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <rth@twiddle.net>
2015-05-14 12:15:14 -07:00
Richard Henderson
59227d5d45 tcg: Merge memop and mmu_idx parameters to qemu_ld/st
At the tcg opcode level, not at the tcg-op.h generator level.
This requires minor changes through all of the tcg backends,
but none of the cpu translators.

Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <rth@twiddle.net>
2015-05-14 12:14:55 -07:00
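
A sketch of the merged parameter's packing, per this series' helpers
(layout illustrative):

| typedef uint32_t TCGMemOpIdx;
|
| /* Pack the memop flags and the mmu index into one opcode parameter. */
| static inline TCGMemOpIdx make_memop_idx(TCGMemOp op, unsigned idx)
| {
|     return (op << 4) | idx;    /* mmu_idx in the low 4 bits */
| }
|
| static inline unsigned get_mmuidx(TCGMemOpIdx oi)
| {
|     return oi & 15;
| }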
Richard Henderson
bec1631100 tcg: Change generator-side labels to a pointer
This is less about improved type checking than enabling a
subsequent change to the representation of labels.

Acked-by: Claudio Fontana <claudio.fontana@huawei.com>
Tested-by: Claudio Fontana <claudio.fontana@huawei.com>
Cc: Andrzej Zaborowski <balrogg@gmail.com>
Cc: Peter Maydell <peter.maydell@linaro.org>
Cc: Aurelien Jarno <aurelien@aurel32.net>
Cc: Blue Swirl <blauwirbel@gmail.com>
Cc: Stefan Weil <sw@weilnetz.de>
Reviewed-by: Bastian Koppelmann <kbastian@mail.uni-paderborn.de>
Signed-off-by: Richard Henderson <rth@twiddle.net>
2015-03-13 12:28:18 -07:00
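
Illustrative generator-side usage once labels are pointers:

| TCGLabel *l = gen_new_label();
| tcg_gen_brcondi_i32(TCG_COND_EQ, val, 0, l);
| /* ... ops here are branched over when val == 0 ... */
| gen_set_label(l);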
Peter Maydell
1045fc0439 tcg/ppc: Fix support for 64-bit PPC MacOSX hosts
Add back in the support for 64-bit PPC MacOSX hosts that was
broken in the recent merge of the 32-bit and 64-bit TCG backends.

Reported-by: Andreas Färber <andreas.faerber@web.de>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <rth@twiddle.net>
Tested-by: Andreas Färber <andreas.faerber@web.de>
2014-06-29 11:38:50 +01:00
Richard Henderson
d4cba13bdf tcg/ppc: Fix failure in tcg_out_mem_long
With rt != r0 on loads, we use rt for scratch.  If we need an index
register different from base, we can't use rt, but r0 is usable.

Signed-off-by: Richard Henderson <rth@twiddle.net>
Message-id: 1403843160-30332-1-git-send-email-rth@twiddle.net
Tested-by: Cédric Le Goater <clg@fr.ibm.com>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2014-06-27 13:23:41 +01:00
Richard Henderson
a84ac4cbbb tcg-ppc: Use the return address as a base pointer
This can significantly reduce code size for generation of (some)
64-bit constants.  With the side effect that we know for a fact
that exit_tb can use the register to good effect.

Tested-by: Tom Musta <tommusta@gmail.com>
Signed-off-by: Richard Henderson <rth@twiddle.net>
2014-06-23 07:32:33 -07:00
Richard Henderson
224f9fd419 tcg-ppc: Merge cache-utils into the backend
As a "utility", it only supported ppc, and in a way that other
tcg backends provided directly in tcg-target.h.  Removing this
disparity is easier now that the two ppc backends are merged.

Tested-by: Tom Musta <tommusta@gmail.com>
Signed-off-by: Richard Henderson <rth@twiddle.net>
2014-06-23 07:32:30 -07:00
Richard Henderson
40d964b563 tcg-ppc: Rename the tcg/ppc64 backend
The other tcg backends that support 32- and 64-bit modes
use the 32-bit name for the port.  Follow suit.

Tested-by: Tom Musta <tommusta@gmail.com>
Signed-off-by: Richard Henderson <rth@twiddle.net>
2014-06-23 07:32:23 -07:00
Richard Henderson
b38daef9d4 tcg-ppc: Remove the backend
Vectoring the 32-bit build to the ppc64 directory.

Tested-by: Tom Musta <tommusta@gmail.com>
Signed-off-by: Richard Henderson <rth@twiddle.net>
2014-06-23 07:32:18 -07:00
Richard Henderson
9171478c95 tcg-ppc: Use uintptr_t in ppc_tb_set_jmp_target
Tested-by: Tom Musta <tommusta@gmail.com>
Signed-off-by: Richard Henderson <rth@twiddle.net>
2014-06-23 07:29:30 -07:00
Richard Henderson
3d1b2ff62c tcg: Remove TCG_TARGET_HAS_new_ldst
Since all backends have been converted, remove the compatibility code.

Acked-by: Claudio Fontana <claudio.fontana@huawei.com>
Signed-off-by: Richard Henderson <rth@twiddle.net>
2014-06-04 14:10:26 -07:00
Richard Henderson
96d0ee7f09 tcg: Remove unreachable code in tcg_out_op and op_defs
The INDEX_op_call case has just been obsoleted; the mov and movi
cases have not been reachable for years.  Attempt to document this
both in each tcg_out_op switch, and via TCG_OPF_NOT_PRESENT.

Because of the TCG_OPF_NOT_PRESENT change, this must be done for
all targets in a single commit.

Signed-off-by: Richard Henderson <rth@twiddle.net>
2014-05-12 11:13:13 -07:00
Richard Henderson
00d7a1acab tcg-ppc: Split out tcg_out_call
Signed-off-by: Richard Henderson <rth@twiddle.net>
2014-05-12 11:13:12 -07:00
Richard Henderson
38cf39f739 tcg-ppc: Define TCG_TARGET_INSN_UNIT_SIZE
And use tcg pointer differencing functions as appropriate.

Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <rth@twiddle.net>
2014-05-12 10:03:04 -07:00
Richard Henderson
02eb19d0ec tcg: Use HOST_WORDS_BIGENDIAN
Instead of rolling a local TCG_TARGET_WORDS_BIGENDIAN.

Signed-off-by: Richard Henderson <rth@twiddle.net>
2014-04-18 16:57:37 -07:00
Richard Henderson
df9ebea53e tcg: Relax requirement for mulu2_i32 on 32-bit hosts
Instead require either mulu2_i32 or muluh_i32.  The code in tcg-op.h
already supports looking for both; presumably an earlier conversion
was left incomplete.

Signed-off-by: Richard Henderson <rth@twiddle.net>
2014-04-18 16:57:37 -07:00
Richard Henderson
f6c6afc1d4 tcg: Add TCGType parameter to tcg_target_const_match
Most 64-bit targets need to be able to ignore the high bits
of a TCG_TYPE_I32 value.

Suggested-by: Stuart Brady <sdb@zubnet.me.uk>
Signed-off-by: Richard Henderson <rth@twiddle.net>
2014-04-18 16:57:36 -07:00
Stefan Weil
ad5171dbd4 tcg: Fix warning (1 bit signed bitfield entry) and replace int by bool
Static code analyzers complain about signed bitfields with only a single
bit. is_ld is used as a boolean value, so make it bool.

ppc64 already used bool for the 2nd argument is_ld of the local function
add_qemu_ldst_label. Modify all other TCG targets to follow this
example.

Signed-off-by: Stefan Weil <sw@weilnetz.de>
Signed-off-by: Richard Henderson <rth@twiddle.net>
2014-04-18 16:57:36 -07:00
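
The shape of the fix, sketched (a 1-bit signed bitfield can only hold
0 and -1, hence the warning):

| typedef struct TCGLabelQemuLdst {
|     bool is_ld;        /* was: int is_ld:1;  load vs store */
|     /* ... */
| } TCGLabelQemuLdst;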
Richard Henderson
5dd391604f tcg-ppc: Support new ldst opcodes
Signed-off-by: Richard Henderson <rth@twiddle.net>
2013-10-12 16:19:20 -07:00
Richard Henderson
92d0acda27 tcg-ppc: Convert to le/be ldst helpers
Signed-off-by: Richard Henderson <rth@twiddle.net>
2013-10-12 16:19:20 -07:00
Richard Henderson
f1a16dcdd5 tcg-ppc: Use TCGMemOp within qemu_ldst routines
Signed-off-by: Richard Henderson <rth@twiddle.net>
2013-10-12 16:19:20 -07:00
Richard Henderson
f713d6ad7b tcg: Add qemu_ld_st_i32/64
Step two in the transition, adding the new ldst opcodes.  Keep the old
opcodes around until all backends support the new opcodes.

Signed-off-by: Richard Henderson <rth@twiddle.net>
2013-10-10 13:19:21 -07:00
Richard Henderson
9ecefc84dd tcg: Add tcg-be-ldst.h
Move TCGLabelQemuLdst and related stuff out of tcg.h.

Signed-off-by: Richard Henderson <rth@twiddle.net>
2013-10-10 11:44:26 -07:00
Richard Henderson
8f50c841b3 tcg-ppc: Fix and cleanup tcg_out_tlb_check
The fix: sparc has so many mmu modes that the last one overflowed
the 16-bit signed offset we had assumed would fit.  Handle this, and
check the new assumption at compile time.

Load the tlb addend earlier for the fast path.

Remove the explicit address + addend and make use of index addressing.

Adjust constraints for qemu_ld64 such that we don't clobber the address
register or tlb addend before loading both values.

Signed-off-by: Richard Henderson <rth@twiddle.net>
2013-09-25 07:46:31 -07:00
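
A sketch of the compile-time check described, using the era's TLB layout
(exact expression may differ):

| /* The TLB comparator for the last mmu mode must still be reachable
|    with a 16-bit signed displacement from the env pointer. */
| QEMU_BUILD_BUG_ON(offsetof(CPUArchState,
|                            tlb_table[NB_MMU_MODES - 1][1]) > 0x7fff);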
Richard Henderson
5b1c985b7e tcg-ppc: Use conditional branch and link to slow path
Saves one insn per slow path.  Note that we can no longer use
a tail call into the store helper.

Signed-off-by: Richard Henderson <rth@twiddle.net>
2013-09-25 07:46:31 -07:00
Richard Henderson
1d10cf9886 tcg-ppc: Cleanup tcg_out_qemu_ld/st_slow_path
Coding style fixes.  Use TCGReg enumeration values instead of raw
numbers.  Don't needlessly pull the whole TCGLabelQemuLdst struct
into local variables.  Less conditional compilation.

No functional changes.

Signed-off-by: Richard Henderson <rth@twiddle.net>
2013-09-25 07:46:31 -07:00
Richard Henderson
4b2b114d8c tcg-ppc: Avoid code for nop move
While these are rare in code that's been through the optimizer,
they are not uncommon within the tcg backend.

Signed-off-by: Richard Henderson <rth@twiddle.net>
2013-09-25 07:46:31 -07:00
Paolo Bonzini
619f90ba62 tcg-ppc: use new return-argument ld/st helpers
These use a 32-bit load-of-immediate to save a mflr+addi+mtlr sequence.
Tested with a Windows 98 guest (pretty much the most recent thing I
could run on my PPC machine) and kvm-unit-tests's sieve.flat.  The
speed up for sieve.flat is as high as 10% for qemu-system-i386, 25%
(no kidding) for qemu-system-x86_64 on my PowerBook G4.

Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Richard Henderson <rth@twiddle.net>
2013-09-25 07:45:39 -07:00