This adds a visitor that can access type records randomly,
caching the intermediate results it learns about during
partial linear scans. This yields amortized O(1) access to a
type stream even though type streams cannot normally be
indexed.
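In case the scheme is easier to see in code, here is a rough sketch of
the caching idea; the record encoding and names below are invented for
illustration and are not the actual visitor. Offsets discovered during a
partial scan are remembered, so any access at or before the scan
frontier is O(1), and an access past it only pays for the records not
yet visited.

  #include <cstdint>
  #include <cstring>
  #include <optional>
  #include <vector>

  // Hypothetical encoding: each record is a 2-byte length prefix
  // followed by that many payload bytes.
  class LazyRecordIndex {
  public:
    explicit LazyRecordIndex(std::vector<uint8_t> S) : Stream(std::move(S)) {}

    // Return the payload of record `Index`, scanning forward (and caching
    // the offsets learned along the way) only as far as needed.
    std::optional<std::vector<uint8_t>> getRecord(size_t Index) {
      while (KnownOffsets.size() <= Index)
        if (!scanOneMore())
          return std::nullopt;           // Ran off the end of the stream.
      uint64_t Off = KnownOffsets[Index];
      uint16_t Len = lengthAt(Off);
      return std::vector<uint8_t>(Stream.begin() + Off + 2,
                                  Stream.begin() + Off + 2 + Len);
    }

  private:
    bool scanOneMore() {
      uint64_t Next =
          KnownOffsets.empty()
              ? 0
              : KnownOffsets.back() + 2 + lengthAt(KnownOffsets.back());
      if (Next + 2 > Stream.size() || Next + 2 + lengthAt(Next) > Stream.size())
        return false;
      KnownOffsets.push_back(Next);      // Cache what this partial scan learned.
      return true;
    }

    uint16_t lengthAt(uint64_t Off) const {
      uint16_t Len;
      std::memcpy(&Len, Stream.data() + Off, sizeof(Len));
      return Len;
    }

    std::vector<uint8_t> Stream;
    std::vector<uint64_t> KnownOffsets;  // One entry per record seen so far.
  };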
Differential Revision: https://reviews.llvm.org/D33009
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@302936 91177308-0d34-0410-b5e6-96231b3b80d8
When randomly accessing an element by offset, we weren't passing
the offset through, so a subsequent call to .offset() would
return 0.
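A tiny illustration of the behavior the fix restores (the names are
invented, not the real classes): when we jump straight to an element,
its byte offset has to be threaded into whatever object .offset() is
later called on.

  #include <cassert>
  #include <cstdint>

  struct RecordRef {
    uint32_t AbsoluteOffset = 0;        // Must be seeded when jumping to an element.
    uint32_t offset() const { return AbsoluteOffset; }
  };

  RecordRef atIndex(uint32_t Index, uint32_t RecordSize) {
    // The bug amounted to returning RecordRef{} here, silently dropping
    // the offset of the element we jumped to.
    return RecordRef{Index * RecordSize};
  }

  int main() {
    assert(atIndex(3, 8).offset() == 24); // Non-zero once the offset is threaded through.
    return 0;
  }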
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@302292 91177308-0d34-0410-b5e6-96231b3b80d8
This is similar to my recent fix for VarStreamArrayIterator, but the cause
(and thus the fix) is subtly different. The FixedStreamArrayIterator
iterates over a const Array, so the iterator's value type must be const.
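Here is a minimal sketch of why that matters, using a stand-in for an
iterator adapter base that derives its reference type from the declared
value type; it mirrors the shape of the problem rather than the actual
LLVM classes.

  // A stand-in adapter base: reference is derived from value_type.
  template <typename Derived, typename ValueType> struct IteratorAdapterBase {
    using value_type = ValueType;
    using reference = ValueType &;
  };

  template <typename T>
  struct ConstArrayIterator
      : IteratorAdapterBase<ConstArrayIterator<T>, const T> { // const T, not T
    using Base = IteratorAdapterBase<ConstArrayIterator<T>, const T>;

    explicit ConstArrayIterator(const T *P) : Ptr(P) {}

    // With value_type = T (non-const), this would have to bind T& to the
    // elements of a const array and fail to compile.
    typename Base::reference operator*() const { return *Ptr; }
    ConstArrayIterator &operator++() { ++Ptr; return *this; }
    bool operator!=(const ConstArrayIterator &RHS) const { return Ptr != RHS.Ptr; }

  private:
    const T *Ptr;
  };

  int main() {
    const int Data[] = {10, 20, 30};
    int Sum = 0;
    for (ConstArrayIterator<int> It(Data), End(Data + 3); It != End; ++It)
      Sum += *It;
    return Sum == 60 ? 0 : 1;
  }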
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@302257 91177308-0d34-0410-b5e6-96231b3b80d8
I tried to run llvm-pdbdump on a very large (~1.5GB) PDB to
identify show-stopping performance problems. This patch
addresses the first such problem.
When loading the DBI stream, before anyone has even tried to
access a single record, we build an in memory map of every
source file for every module. In the particular PDB I was
using, this was over 85 million files. Specifically, the
complexity is O(m*n) where m is the number of modules and
n is the average number of source files (including headers)
per module.
The whole reason for doing this was so that we could have
constant time access to any module and any of its source
file lists. However, we can still get O(1) access to the
source file list for a given module with a simple O(m)
precomputation, and access to the list of modules is
already O(1) anyway.
So this patch reduces the O(m*n) up-front precomputation
to an O(m) one, where n is ~6,500 and n*m is about 85 million
in my pathological test case.
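A rough sketch of the shape of that precomputation, with an invented
data layout rather than the real DBI stream: one prefix-sum entry per
module is enough to find any module's slice of the flat file list in
O(1), and nothing per-file is materialized until someone actually asks
for that module.

  #include <cstdint>
  #include <string>
  #include <vector>

  // Illustrative stand-ins for what the stream provides; not the real layout.
  struct SourceFileInfo {
    std::vector<uint16_t> FileCountPerModule; // One count per module.
    std::vector<std::string> FlatFileNames;   // All names, module-major order.
  };

  class ModuleFileLists {
  public:
    explicit ModuleFileLists(SourceFileInfo SFI) : Info(std::move(SFI)) {
      // O(m) precomputation: the starting index of each module's slice.
      ModuleStart.reserve(Info.FileCountPerModule.size() + 1);
      uint64_t Running = 0;
      ModuleStart.push_back(0);
      for (uint16_t Count : Info.FileCountPerModule) {
        Running += Count;
        ModuleStart.push_back(Running);
      }
    }

    // O(1) to locate a module's slice; names are materialized only for the
    // module actually being asked about, never for all modules up front.
    std::vector<std::string> getSourceFiles(size_t ModuleIndex) const {
      uint64_t Begin = ModuleStart[ModuleIndex];
      uint64_t End = ModuleStart[ModuleIndex + 1];
      return std::vector<std::string>(Info.FlatFileNames.begin() + Begin,
                                      Info.FlatFileNames.begin() + End);
    }

  private:
    SourceFileInfo Info;
    std::vector<uint64_t> ModuleStart;        // m + 1 prefix sums.
  };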
Differential Revision: https://reviews.llvm.org/D32870
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@302205 91177308-0d34-0410-b5e6-96231b3b80d8
The raw CodeView format references strings by "offsets", but it's
confusing what table the offset refers to. In the case of line
number information, it's an offset into a buffer of records,
and an indirection is required to get another offset into a
different table to find the final string. And in the case of
checksum information, there is no indirection, and the offset
refers directly to the location of the string in another buffer.
This would be less confusing if we always just referred to the
strings by their value and had the library be smart enough
to resolve the offsets correctly on its own from the right
location.
This patch makes that possible. When either reading or writing,
all the user deals with are strings, and the library does the
appropriate translations behind the scenes.
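A hedged sketch of what that looks like from the caller's side; the
types and functions are invented for illustration and are not the
CodeView API. The user-facing record holds only the string, and the
serialization layer is where offsets appear and disappear.

  #include <cstdint>
  #include <map>
  #include <string>
  #include <vector>

  class StringTable {
  public:
    uint32_t insert(const std::string &S) {       // Writing: string in, offset out.
      auto It = OffsetOf.find(S);
      if (It != OffsetOf.end())
        return It->second;
      uint32_t Offset = static_cast<uint32_t>(Buffer.size());
      Buffer += S;
      Buffer += '\0';
      OffsetOf.emplace(S, Offset);
      return Offset;
    }

    std::string getString(uint32_t Offset) const { // Reading: offset resolved internally.
      return std::string(Buffer.c_str() + Offset);
    }

  private:
    std::string Buffer;                            // Concatenated NUL-terminated strings.
    std::map<std::string, uint32_t> OffsetOf;
  };

  struct FileChecksumEntry {
    std::string FileName;                          // The user-facing record holds the string...
  };

  // ...and the serializer is what translates to and from offsets.
  uint32_t writeChecksumEntry(StringTable &Strings, const FileChecksumEntry &E) {
    return Strings.insert(E.FileName);
  }

  FileChecksumEntry readChecksumEntry(const StringTable &Strings, uint32_t NameOffset) {
    return {Strings.getString(NameOffset)};
  }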
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@302053 91177308-0d34-0410-b5e6-96231b3b80d8
This was reported by the ASAN bot, and it turned out to be
a fairly fundamental problem with the design of VarStreamArray
and the way it passes context information to the extractor.
The fix was cumbersome, and I'm not entirely pleased with it,
so I plan to revisit this design in the future when I'm not
pressed to get the bots green again. For now, this fixes
the issue by storing the context information by value instead
of by reference, and introduces some impossibly confusing
template magic to make things "work".
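To show the shape of the change without the template magic, here is a
simplified stand-in (not the actual VarStreamArray code): an extractor
that holds its context by reference can dangle once the caller's
context goes out of scope, which is exactly the kind of thing ASAN
flags; holding it by value keeps the extractor self-contained.

  #include <cstdint>
  #include <vector>

  struct ExtractionContext {
    uint32_t RecordAlignment = 4;            // Whatever the extractor needs to know.
  };

  struct RecordExtractor {
    ExtractionContext Ctx;                   // By value: copied in, no lifetime coupling.

    uint32_t recordLength(const std::vector<uint8_t> &Data, uint32_t Offset) const {
      uint32_t Raw = Data[Offset];           // Illustrative: 1-byte length prefix.
      uint32_t Align = Ctx.RecordAlignment;
      return (Raw + Align - 1) / Align * Align;
    }
  };

  RecordExtractor makeExtractor() {
    ExtractionContext Local;                 // Dies at the end of this function...
    return RecordExtractor{Local};           // ...which is fine: the copy survives.
    // Had RecordExtractor held `const ExtractionContext &Ctx`, the returned
    // object would dangle as soon as this frame was torn down.
  }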
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@301999 91177308-0d34-0410-b5e6-96231b3b80d8