It's no longer needed, now that legacy extensions aren't supported.
Pieces removed include the following:
- The "load-extension-default" observer notification.
- The code for reading `defaults/preferences/*.js` from extensions.
- The unit test for this functionality.
- A crash reporter annotation relating to very long prefs set by add-ons.
- All references to "ExtPrefDL".
MozReview-Commit-ID: KMBoYn3uZ3x
--HG--
extra : rebase_source : 4dc8ffd425c6cdf06806409090c4f9d04a64930b
* In the first stage, we fetch changed records, newest first, up to the
download limit. We keep track of the oldest record modified time we
see.
* Once we've fetched all records, we reconcile, noting records that
fail to decrypt or reconcile for the next sync. We then ask the store
to apply all remaining records. Previously, `applyIncomingBatchSize`
specified how many records to apply at a time. I removed this because
it added an extra layer of indirection that's no longer necessary,
now that download batching buffers all records in memory, and all
stores are async.
* In the second stage, we fetch IDs for all remaining records changed
between the last sync and the oldest modified time we saw in the
first stage. We *don't* set the download limit here, to ensure we
add *all* changed records to our backlog, and we use the `"oldest"`
sort order instead of `"index"`.
* In the third stage, we backfill as before. We don't want large deltas
to delay other engines from syncing, so we still only take IDs up to
the download limit from the backlog, and include failed IDs from the
previous sync. On subsequent syncs, we'll keep fetching from the
backlog until it's empty. (A sketch of all three stages follows this list.)
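To make the three stages concrete, here's a minimal sketch. All of the
helpers (`fetchRecords`, `fetchIDs`, `reconcile`, `store.applyIncoming`) and
the control flow around them are hypothetical stand-ins for illustration,
not the actual Sync engine API:

```js
// Hypothetical stand-ins; the real Sync engine looks nothing like this.
const store = {
  async applyIncoming(records) {
    // Stand-in: the real store writes the records to the local database.
    console.log(`applying ${records.length} records`);
  },
};

async function fetchRecords({ newer, limit, sort }) {
  // Stand-in for a server fetch honoring `newer`, `limit`, and `sort`.
  return { records: [], oldestModified: newer };
}

async function fetchIDs({ newer, older, sort }) {
  // Stand-in for an IDs-only fetch; note both `newer` and `older` are set.
  return [];
}

async function reconcile(record) {
  // Stand-in: true means the record should be applied locally.
  return true;
}

async function processIncoming({ lastSync, downloadLimit, backlog }) {
  // Stage 1: fetch changed records, newest first, up to the download
  // limit, tracking the oldest modified time we see.
  let { records, oldestModified } = await fetchRecords({
    newer: lastSync,
    limit: downloadLimit,
    sort: "newest",
  });

  let failedIDs = [];
  let toApply = [];
  for (let record of records) {
    try {
      if (await reconcile(record)) {
        toApply.push(record);
      }
    } catch (ex) {
      // Failed to decrypt or reconcile; note it for the next sync.
      failedIDs.push(record.id);
    }
  }
  // Apply all remaining records at once; there's no
  // `applyIncomingBatchSize` anymore.
  await store.applyIncoming(toApply);

  // Stage 2: if we hit the limit, there may be older changed records we
  // haven't seen. Fetch IDs for *all* of them, with no download limit,
  // sorted by `"oldest"` instead of `"index"`.
  if (records.length == downloadLimit) {
    let remaining = await fetchIDs({
      newer: lastSync,
      older: oldestModified,
      sort: "oldest",
    });
    backlog.push(...remaining);
  }

  // Stage 3: backfill. Take only up to the download limit from the
  // backlog, plus the IDs that failed last sync; the backlog drains
  // over subsequent syncs.
  let backfillIDs = failedIDs.concat(backlog.splice(0, downloadLimit));
  // ...fetch and apply `backfillIDs` the same way as in stage 1...
  return backfillIDs;
}
```

The `newer` plus `older` bounds in stage 2 are what let the backlog capture
everything stage 1 skipped, without re-listing the records we already
applied.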
Other changes to note in this patch:
* `Collection::_rebuildURL` now allows callers to specify both `older`
and `newer`. According to :rfkelly, this is explicitly and
intentionally supported. (See the example after this list.)
* Tests that exercise `applyIncomingBatchSize` are gone, since that's
no longer a thing.
* The test server now shuffles records if the sort order is
unspecified.
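As an illustration of the `older` plus `newer` combination, here's a
hypothetical query helper in the spirit of `Collection::_rebuildURL` (the
name and shape are invented; only the query-string handling is shown):

```js
// Invented helper; the real `_rebuildURL` builds the full collection URI.
function buildQuery({ newer, older, sort, limit }) {
  let params = new URLSearchParams();
  if (newer != null) {
    params.set("newer", newer);
  }
  if (older != null) {
    params.set("older", older);
  }
  if (sort) {
    params.set("sort", sort);
  }
  if (limit) {
    params.set("limit", limit);
  }
  return "?" + params.toString();
}

// Both bounds at once, as the server explicitly supports:
buildQuery({ newer: 1234567890.12, older: 1234599999.99, sort: "oldest" });
// => "?newer=1234567890.12&older=1234599999.99&sort=oldest"
```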
MozReview-Commit-ID: 4EXvNOa8mIo
--HG--
extra : rebase_source : f382f0a883c5aa1f6a4466fefe22ad1a88ab6d20
The test captures the existing logic in `_processIncoming`, even though
it's not quite correct:
* First, we fetch all records changed since the last sync, up to the
download limit, and without an explicit sort order. This happens to
work correctly now because the Python server uses "newest" by
default, but that default could change in the future.
* If we reached the download limit fetching records, we request
IDs for all records changed since the last sync, also up to the
download limit, and sorted by index. This is likely to return IDs
for records we've already seen, since the index is based on
frecency. It's also likely to miss IDs for other changed records,
because the number of changed records might be higher than the
download limit.
* Since we then fast-forward the last sync time, we'll never download
any remaining changed records that we didn't add to our backlog
(see the illustration after this list).
* Finally, we backfill previously failed and backlogged records.
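Here's a small, self-contained illustration of that fast-forward pitfall,
using invented timestamps:

```js
// Three records changed since the last sync, but the download limit is 2.
let changed = [
  { id: "a", modified: 300 },
  { id: "b", modified: 200 },
  { id: "c", modified: 100 },
];
let downloadLimit = 2;

// We fetch newest-first up to the limit, then fast-forward lastSync.
let downloaded = changed.slice(0, downloadLimit);
let lastSync = Math.max(...downloaded.map(r => r.modified)); // 300

// "c" was modified at 100, which is now older than lastSync, so a future
// `newer=lastSync` fetch will never return it unless its ID made it into
// the backlog.
console.log(changed.filter(r => r.modified > lastSync)); // []
```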
MozReview-Commit-ID: 7uQLXMseMIU
--HG--
extra : rebase_source : 719ee2d9e46102195251b410f093da3247095c22
The timestamps are automatically truncated when they're stored in
prefs, which is fine because we don't need millisecond precision.
However, the truncation raises a warning, so we need to explicitly
floor the values.
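A minimal sketch of the idea, with a stand-in `prefs` object and an
invented pref name:

```js
// Illustrative only: `prefs` and the pref name are stand-ins. The pref
// holds an integer, so writing a fractional timestamp would be truncated
// implicitly and raise a warning.
function setLastSyncPref(prefs, timestamp) {
  // Math.floor makes the truncation explicit; we don't need the
  // fractional precision anyway.
  prefs.set("services.sync.example.lastSync", Math.floor(timestamp));
}
```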
MozReview-Commit-ID: BRflL6s0b1
--HG--
extra : rebase_source : 6724a1ad05cb8aee4ab8c666545784960c23a0f3
The dump file isn't in the expected location in Firefox tests, but it is in
Thunderbird tests, which is why the preference to disable loading it wasn't
originally implemented.
MozReview-Commit-ID: HvFqfC69yMQ
--HG--
extra : rebase_source : 1d358292f0ab94299e444f4d3e3454a2259d1a64
It now follows the setting of the `identity.fxaccounts.allowHttp` preference.
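As a sketch, with the real pref name but an invented function and a
stand-in `prefs` wrapper:

```js
// Illustrative: reject a non-HTTPS endpoint unless the
// identity.fxaccounts.allowHttp preference is set.
function assertEndpointAllowed(url, prefs) {
  let allowHttp = prefs.getBoolPref("identity.fxaccounts.allowHttp", false);
  if (new URL(url).protocol !== "https:" && !allowHttp) {
    throw new Error(`Non-HTTPS endpoint rejected: ${url}`);
  }
}
```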
MozReview-Commit-ID: 9646Xi48QMP
--HG--
extra : rebase_source : 630e65bebc00e755ca3be1d159e08fec738d590f
Also fixes an issue where we wouldn't encode to UTF-8 when comparing the
actual size to the limit after the first check.
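A minimal sketch of the corrected check, with illustrative names:

```js
// Compare the UTF-8 byte length, not the UTF-16 string length, on every
// check rather than only the first one.
const encoder = new TextEncoder();
function fitsWithinLimit(payload, maxBytes) {
  return encoder.encode(payload).byteLength <= maxBytes;
}

// "naïve".length is 5, but its UTF-8 encoding is 6 bytes.
fitsWithinLimit("naïve", 5); // false
```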
MozReview-Commit-ID: Cf3byjI1FTZ
--HG--
extra : rebase_source : 272ec3b3ad85f8b44c4d69950be83419054abdab