Bug 1476636. Update webrender to commit 9f21ee5dba0694818a1e2e46d95734ede281447c

Jeff Muizelaar 2018-07-20 17:24:47 -04:00
parent 31da9ce5d8
commit 8ebc4e4580
39 changed files with 1064 additions and 410 deletions


@ -24,14 +24,14 @@ bincode = "1.0"
bitflags = "1.0"
byteorder = "1.0"
cfg-if = "0.1.2"
euclid = "0.17.3"
euclid = "0.18"
fxhash = "0.2.1"
gleam = "0.5"
gleam = "0.6"
image = { optional = true, version = "0.19" }
lazy_static = "1"
log = "0.4"
num-traits = "0.1.43"
plane-split = "0.9.1"
plane-split = "0.10"
png = { optional = true, version = "0.12" }
rayon = "1"
ron = { optional = true, version = "0.1.7" }

gfx/webrender/doc/blob.md Normal file

@ -0,0 +1,17 @@
# Blob images
The blob image mechanism now has two traits:
- [`BlobImageHandler`](https://github.com/servo/webrender/pull/2785/files#diff-2b72a28a40b83edf41a59adfd46b1a11R188) is roughly the equivalent of the previous `BlobImageRenderer` except that it doesn't do any rendering (it manages the state of the blob commands, and resources like fonts).
- [`AsyncBlobImageRasterizer`](https://github.com/servo/webrender/pull/2785/files#diff-2b72a28a40b83edf41a59adfd46b1a11R211) is created by the handler and sent over to the scene builder thread. The async rasterizer is meant to be a snapshot of the state of the blob image commands and can execute those commands when given a set of requests (both traits are sketched just below).
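A minimal sketch of how the two traits relate, using simplified stand-in types rather than the exact webrender API signatures:

```rust
// Hedged sketch: the names mirror this commit, but descriptors, dirty rects
// and error types are simplified stand-ins for the real webrender_api types.
struct BlobImageParams;   // which blob/tile to render, plus its descriptor
struct BlobImageRequest;  // image key + optional tile offset
type BlobImageResult = Result<Vec<u8>, String>; // rasterized pixels or an error

/// Lives on the render backend thread; owns the blob commands and the
/// resources (fonts) they reference, but never rasterizes anything itself.
trait BlobImageHandler {
    /// Snapshot the current blob state into something that can execute the
    /// commands on another thread.
    fn create_blob_rasterizer(&mut self) -> Box<dyn AsyncBlobImageRasterizer>;
}

/// Sent to the scene builder thread; executes the snapshotted commands for
/// whatever requests it is handed.
trait AsyncBlobImageRasterizer: Send {
    fn rasterize(&mut self, requests: &[BlobImageParams])
        -> Vec<(BlobImageRequest, BlobImageResult)>;
}
```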
When receiving a transaction, the render backend / resource cache looks at the list of added and updated blob images in that transaction, [collects the list of blob images and tiles that need to be rendered](https://github.com/servo/webrender/pull/2785/files#diff-77cbdf7ba9ebae81feb38a64c21b8454R848), creates a rasterizer, and ships both to the scene builder.
After building the scene, the rasterizer is handed the list of blob requests and [does all of the rasterization](https://github.com/servo/webrender/pull/2785/files#diff-856af4d4ff2333d4204e7e5a87a93c58R153), blocking the scene builder thread until the work is done.
When scene building and rasterization are done, the render backend receives the rasterized blobs and [stores them](https://github.com/servo/webrender/pull/2785/files#diff-77cbdf7ba9ebae81feb38a64c21b8454R520) so that they are available when frame building needs them.
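Concretely, the blob work is threaded through the scene builder messages. A trimmed sketch of the fields added in this commit (stub types, unrelated fields elided):

```rust
// Stub types so the sketch stands alone; the real definitions live in
// webrender_api and the scene builder module.
struct BlobImageParams;
struct BlobImageRequest;
type BlobImageResult = Result<Vec<u8>, String>;
trait AsyncBlobImageRasterizer {}

// Render backend -> scene builder: the blob requests for this transaction,
// plus a rasterizer snapshot that can execute them.
enum SceneBuilderRequest {
    Transaction {
        blob_requests: Vec<BlobImageParams>,
        blob_rasterizer: Option<Box<dyn AsyncBlobImageRasterizer>>,
        // ... scene request, resource updates, frame ops ...
    },
}

// Scene builder -> render backend: the rasterized results come back together
// with the rasterizer, which is then kept for frame-building fallbacks.
enum SceneBuilderResult {
    Transaction {
        rasterized_blobs: Vec<(BlobImageRequest, BlobImageResult)>,
        blob_rasterizer: Option<Box<dyn AsyncBlobImageRasterizer>>,
        // ... built scene, resource updates ...
    },
}
```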
Because blob images can be huge, we don't always want to rasterize them entirely during scene building. To decide what should be rasterized, we rely on Gecko giving us a hint through the added `set_image_visible_area` API. When the render backend receives that message, [it decides which tiles are going to be rasterized](https://github.com/servo/webrender/pull/2785/files#diff-77cbdf7ba9ebae81feb38a64c21b8454R469). This information is also used to [decide which tiles to evict](https://github.com/servo/webrender/pull/2785/files#diff-77cbdf7ba9ebae81feb38a64c21b8454R430), so that we don't keep thousands of tiles around if we scroll through a massive blob image. The idea is for the visible area to correspond to the size of the display list.
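For illustration, here is a self-contained sketch of that tile selection, mirroring the `compute_tile_range` helper added in `image.rs` but with plain tuples instead of webrender's unit-tagged euclid types:

```rust
/// Map a visible area given in normalized (0.0..1.0) image coordinates to the
/// half-open range of tiles that covers it.
fn tile_range(
    visible: (f32, f32, f32, f32), // (x0, y0, x1, y1) in normalized coordinates
    image_size: (u32, u32),        // image size in pixels
    tile_size: u16,                // tile edge in pixels
) -> ((u16, u16), (u16, u16)) {
    // How many tiles the image spans along each axis.
    let tiles_w = image_size.0 as f32 / tile_size as f32;
    let tiles_h = image_size.1 as f32 / tile_size as f32;
    let first = (
        (visible.0 * tiles_w).floor() as u16,
        (visible.1 * tiles_h).floor() as u16,
    );
    let one_past_last = (
        (visible.2 * tiles_w).ceil() as u16,
        (visible.3 * tiles_h).ceil() as u16,
    );
    (first, one_past_last)
}

fn main() {
    // A 2048x2048 blob tiled at 256px with only the top-left quarter visible:
    // tiles (0,0) up to (4,4) exclusive are rasterized, the rest can be evicted.
    assert_eq!(
        tile_range((0.0, 0.0, 0.5, 0.5), (2048, 2048), 256),
        ((0, 0), (4, 4))
    );
}
```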
Sometimes, however, Gecko gets this visible area "wrong", or at least gives webrender a certain visible area and webrender later requests tiles during frame building that weren't in that area. I think this is inevitable because the culling logic in Gecko and webrender works very differently, so relying on the two to match exactly is fragile at best.
To work around this kind of situation, we [keep the async blob rasterizer](https://github.com/servo/webrender/pull/2785/files#diff-3722af8f0bcba9c3ce197a9aa3052014R769) that we sent to the scene builder around, storing it in the resource cache when we swap the scene. This blob rasterizer represents the state of the blob commands at the time the transaction was built (which is potentially different from the current state of the blob image handler). Frame building [collects a list of blob images](https://github.com/servo/webrender/pull/2785/files#diff-77cbdf7ba9ebae81feb38a64c21b8454R811) (or blob tiles) that are not already rasterized and asks the current async blob rasterizer to rasterize them synchronously on the render backend. The hope is that this happens rarely.
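A rough sketch of that fallback, following the shape of the `rasterize_missing_blob_images` path in the resource cache (types simplified, error handling elided):

```rust
// Simplified stand-ins; the real code tracks descriptors, tiles and results.
struct BlobImageParams;
struct RasterizedBlob;

trait AsyncBlobImageRasterizer {
    fn rasterize(&mut self, requests: &[BlobImageParams]) -> Vec<RasterizedBlob>;
}

struct ResourceCache {
    // Blob tiles requested during frame building that scene building did not
    // rasterize (e.g. they were outside the hinted visible area).
    missing_blob_images: Vec<BlobImageParams>,
    // The rasterizer snapshot stored when the scene was swapped.
    blob_image_rasterizer: Option<Box<dyn AsyncBlobImageRasterizer>>,
}

impl ResourceCache {
    // Runs synchronously on the render backend at the end of frame building;
    // the hope is that this path is taken rarely.
    fn rasterize_missing_blob_images(&mut self) -> Vec<RasterizedBlob> {
        if self.missing_blob_images.is_empty() {
            return Vec::new();
        }
        let requests = std::mem::replace(&mut self.missing_blob_images, Vec::new());
        self.blob_image_rasterizer
            .as_mut()
            .expect("missing blobs but no rasterizer for the current scene")
            .rasterize(&requests)
    }
}
```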
Another important detail is that, for this to work, resources used by blob images (currently only fonts) need to be in sync with the blobs. Fortunately, fonts are currently immutable, so we mostly need to make sure they are added [before the transaction](https://github.com/servo/webrender/pull/2785/files#diff-77cbdf7ba9ebae81feb38a64c21b8454R440) is built and [removed after](https://github.com/servo/webrender/pull/2785/files#diff-77cbdf7ba9ebae81feb38a64c21b8454R400) the transaction is swapped. If blob images were to use images, we'd have to either do the same for those images (and disallow updating them), or maintain the state of images before and after scene building like we effectively do for blobs.
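In the code, this ordering falls out of splitting resource updates into a pass that runs before scene building and one that runs after the swap (`pre_scene_building_update` / `post_scene_building_update`). A trimmed, hypothetical sketch of the split:

```rust
// Trimmed stand-in for webrender's ResourceUpdate; only enough variants to
// show the split, with placeholder u64 keys.
enum ResourceUpdate {
    AddFont(u64),
    AddFontInstance(u64),
    DeleteFont(u64),
    DeleteFontInstance(u64),
}

/// Runs on the render backend before the transaction is forwarded to the
/// scene builder: font additions are applied immediately so that async blob
/// rasterization can use them; everything else is deferred.
fn pre_scene_building_update(updates: &mut Vec<ResourceUpdate>) {
    let mut deferred = Vec::with_capacity(updates.len());
    for update in std::mem::replace(updates, Vec::new()) {
        match update {
            ResourceUpdate::AddFont(_key) => { /* add the font template now */ }
            ResourceUpdate::AddFontInstance(_key) => { /* add the instance now */ }
            other => deferred.push(other),
        }
    }
    *updates = deferred;
}

/// Runs after the built scene has been swapped in: deletions happen here, so
/// no in-flight blob rasterization can still be using the fonts.
fn post_scene_building_update(updates: Vec<ResourceUpdate>) {
    for update in updates {
        match update {
            ResourceUpdate::DeleteFont(_key)
            | ResourceUpdate::DeleteFontInstance(_key) => {
                // delete the font / font instance
            }
            _ => {}
        }
    }
}
```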


@ -670,17 +670,20 @@ impl AlphaBatchBuilder {
// Push into parent plane splitter.
debug_assert!(picture.surface.is_some());
let real_xf = &ctx.clip_scroll_tree
.spatial_nodes[picture.reference_frame_index.0]
.world_content_transform
.into();
let polygon = make_polygon(
let real_xf = &ctx
.transforms
.get_transform(picture.reference_frame_index);
match make_polygon(
picture.real_local_rect,
real_xf,
&real_xf.m,
prim_index.0,
);
splitter.add(polygon);
) {
Some(polygon) => splitter.add(polygon),
None => {
// this shouldn't happen, the path will ultimately be
// turned into `expect` when the splitting code is fixed
}
}
return;
}
@ -1697,7 +1700,7 @@ fn make_polygon(
rect: LayoutRect,
transform: &LayoutToWorldTransform,
anchor: usize,
) -> Polygon<f64, WorldPixel> {
) -> Option<Polygon<f64, WorldPixel>> {
let mat = TypedTransform3D::row_major(
transform.m11 as f64,
transform.m12 as f64,
@ -1715,7 +1718,7 @@ fn make_polygon(
transform.m42 as f64,
transform.m43 as f64,
transform.m44 as f64);
Polygon::from_transformed_rect(rect.cast().unwrap(), mat, anchor)
Polygon::from_transformed_rect(rect.cast(), mat, anchor)
}
/// Batcher managing draw calls into the clip mask (in the RT cache).


@ -34,8 +34,8 @@ impl ClipNode {
const EMPTY: ClipNode = ClipNode {
spatial_node: SpatialNodeIndex(0),
handle: None,
clip_chain_index: ClipChainIndex(0),
parent_clip_chain_index: ClipChainIndex(0),
clip_chain_index: ClipChainIndex::NO_CLIP,
parent_clip_chain_index: ClipChainIndex::NO_CLIP,
clip_chain_node: None,
};
@ -83,7 +83,8 @@ impl ClipNode {
},
local_clip_rect: spatial_node
.coordinate_system_relative_transform
.transform_rect(&local_outer_rect),
.transform_rect(&local_outer_rect)
.expect("clip node transform is not valid"),
screen_outer_rect,
screen_inner_rect,
prev: None,


@ -62,6 +62,10 @@ pub struct ClipChainDescriptor {
#[derive(Clone, Copy, Debug, Eq, PartialEq)]
pub struct ClipChainIndex(pub usize);
impl ClipChainIndex {
pub const NO_CLIP: Self = ClipChainIndex(0);
}
pub struct ClipScrollTree {
/// Nodes which determine the positions (offsets and transforms) for primitives
/// and clips.


@ -190,6 +190,9 @@ pub struct DisplayListFlattener<'a> {
/// A stack of the currently active shadows
shadow_stack: Vec<(Shadow, PictureIndex)>,
/// The stack keeping track of the root clip chains associated with pipelines.
pipeline_clip_chain_stack: Vec<ClipChainIndex>,
/// A list of scrollbar primitives.
pub scrollbar_prims: Vec<ScrollbarPrimitive>,
@ -240,6 +243,7 @@ impl<'a> DisplayListFlattener<'a> {
picture_stack: Vec::new(),
shadow_stack: Vec::new(),
sc_stack: Vec::new(),
pipeline_clip_chain_stack: Vec::new(),
prim_store: old_builder.prim_store.recycle(),
clip_store: old_builder.clip_store.recycle(),
};
@ -530,7 +534,8 @@ impl<'a> DisplayListFlattener<'a> {
self.id_to_index_mapper.initialize_for_pipeline(pipeline);
self.add_clip_node(
//TODO: use or assert on `clip_and_scroll_ids.clip_node_id` ?
let clip_chain_index = self.add_clip_node(
info.clip_id,
clip_and_scroll_ids.scroll_node_id,
ClipRegion::create_for_clip_node_with_local_clip(
@ -538,6 +543,7 @@ impl<'a> DisplayListFlattener<'a> {
reference_frame_relative_offset
),
);
self.pipeline_clip_chain_stack.push(clip_chain_index);
let bounds = item.rect();
let origin = *reference_frame_relative_offset + bounds.origin.to_vector();
@ -564,6 +570,7 @@ impl<'a> DisplayListFlattener<'a> {
self.flatten_root(pipeline, &iframe_rect.size);
self.pop_reference_frame();
self.pipeline_clip_chain_stack.pop();
}
fn flatten_item<'b>(
@ -733,12 +740,15 @@ impl<'a> DisplayListFlattener<'a> {
}
SpecificDisplayItem::ClipChain(ref info) => {
let items = self.get_clip_chain_items(pipeline_id, item.clip_chain_items())
.iter()
.map(|id| self.id_to_index_mapper.get_clip_node_index(*id))
.collect();
let parent = info.parent.map(|id|
self.id_to_index_mapper.get_clip_chain_index(&ClipId::ClipChain(id))
);
.iter()
.map(|id| self.id_to_index_mapper.get_clip_node_index(*id))
.collect();
let parent = match info.parent {
Some(id) => Some(
self.id_to_index_mapper.get_clip_chain_index(&ClipId::ClipChain(id))
),
None => self.pipeline_clip_chain_stack.last().cloned(),
};
let clip_chain_index =
self.clip_scroll_tree.add_clip_chain_descriptor(parent, items);
self.id_to_index_mapper.add_clip_chain(ClipId::ClipChain(info.id), clip_chain_index);
@ -906,7 +916,7 @@ impl<'a> DisplayListFlattener<'a> {
) {
let clip_chain_id = match clipping_node {
Some(ref clipping_node) => self.id_to_index_mapper.get_clip_chain_index(clipping_node),
None => ClipChainIndex(0), // This means no clipping.
None => ClipChainIndex::NO_CLIP,
};
let clip_and_scroll = ScrollNodeAndClipChain::new(
self.get_spatial_node_index_for_clip_id(spatial_node),
@ -1224,7 +1234,7 @@ impl<'a> DisplayListFlattener<'a> {
match parent_id {
Some(ref parent_id) =>
self.id_to_index_mapper.map_to_parent_clip_chain(reference_frame_id, parent_id),
_ => self.id_to_index_mapper.add_clip_chain(reference_frame_id, ClipChainIndex(0)),
_ => self.id_to_index_mapper.add_clip_chain(reference_frame_id, ClipChainIndex::NO_CLIP),
}
index
}
@ -1278,7 +1288,7 @@ impl<'a> DisplayListFlattener<'a> {
new_node_id: ClipId,
parent_id: ClipId,
clip_region: ClipRegion,
) {
) -> ClipChainIndex {
let clip_sources = ClipSources::from(clip_region);
let handle = self.clip_store.insert(clip_sources);
@ -1293,6 +1303,7 @@ impl<'a> DisplayListFlattener<'a> {
handle,
);
self.id_to_index_mapper.add_clip_chain(new_node_id, clip_chain_index);
clip_chain_index
}
pub fn add_scroll_frame(


@ -72,6 +72,7 @@ pub struct FrameBuildingContext<'a> {
pub pipelines: &'a FastHashMap<PipelineId, Arc<ScenePipeline>>,
pub screen_rect: DeviceIntRect,
pub clip_scroll_tree: &'a ClipScrollTree,
pub clip_chains: &'a [ClipChain],
pub transforms: &'a TransformPalette,
pub max_local_clip: LayoutRect,
}
@ -218,6 +219,7 @@ impl FrameBuilder {
pipelines,
screen_rect: self.screen_rect.to_i32(),
clip_scroll_tree,
clip_chains: &clip_scroll_tree.clip_chains,
transforms: transform_palette,
max_local_clip: LayoutRect::new(
LayoutPoint::new(-MAX_CLIP_COORD, -MAX_CLIP_COORD),
@ -397,7 +399,6 @@ impl FrameBuilder {
device_pixel_scale,
prim_store: &self.prim_store,
resource_cache,
clip_scroll_tree,
use_dual_source_blending,
transforms: &transform_palette,
};


@ -249,7 +249,7 @@ impl FontInstance {
#[allow(dead_code)]
pub fn get_subpx_offset(&self, glyph: &GlyphKey) -> (f64, f64) {
if self.use_subpixel_position() {
let (dx, dy) = glyph.subpixel_offset;
let (dx, dy) = glyph.subpixel_offset();
(dx.into(), dy.into())
} else {
(0.0, 0.0)
@ -287,7 +287,8 @@ impl FontInstance {
if max_size > FONT_SIZE_LIMIT &&
self.transform.is_identity() &&
self.render_mode != FontRenderMode::Subpixel &&
!self.use_subpixel_position() {
!self.use_subpixel_position()
{
max_size / FONT_SIZE_LIMIT
} else {
1.0
@ -372,30 +373,36 @@ impl Into<f64> for SubpixelOffset {
#[derive(Clone, Hash, PartialEq, Eq, Debug, Ord, PartialOrd)]
#[cfg_attr(feature = "capture", derive(Serialize))]
#[cfg_attr(feature = "replay", derive(Deserialize))]
pub struct GlyphKey {
pub index: u32,
pub subpixel_offset: (SubpixelOffset, SubpixelOffset),
}
pub struct GlyphKey(u32);
impl GlyphKey {
pub fn new(
index: u32,
point: DevicePoint,
subpx_dir: SubpixelDirection,
) -> GlyphKey {
) -> Self {
let (dx, dy) = match subpx_dir {
SubpixelDirection::None => (0.0, 0.0),
SubpixelDirection::Horizontal => (point.x, 0.0),
SubpixelDirection::Vertical => (0.0, point.y),
SubpixelDirection::Mixed => (point.x, point.y),
};
let sox = SubpixelOffset::quantize(dx);
let soy = SubpixelOffset::quantize(dy);
assert_eq!(0, index & 0xF0000000);
GlyphKey {
index,
subpixel_offset: (
SubpixelOffset::quantize(dx),
SubpixelOffset::quantize(dy),
),
GlyphKey(index | (sox as u32) << 28 | (soy as u32) << 30)
}
pub fn index(&self) -> GlyphIndex {
self.0 & 0x0FFFFFFF
}
fn subpixel_offset(&self) -> (SubpixelOffset, SubpixelOffset) {
let x = (self.0 >> 28) as u8 & 3;
let y = (self.0 >> 30) as u8 & 3;
unsafe {
(mem::transmute(x), mem::transmute(y))
}
}
}


@ -194,9 +194,9 @@ impl GlyphRasterizer {
// TODO: pathfinder will need to support 2D subpixel offset
let pathfinder_subpixel_offset =
pathfinder_font_renderer::SubpixelOffset(glyph_key.subpixel_offset.0 as u8);
pathfinder_font_renderer::SubpixelOffset(glyph_key.subpixel_offset().0 as u8);
let pathfinder_glyph_key =
pathfinder_font_renderer::GlyphKey::new(glyph_key.index,
pathfinder_font_renderer::GlyphKey::new(glyph_key.index(),
pathfinder_subpixel_offset);
if let Ok(glyph_dimensions) =
@ -281,9 +281,9 @@ fn request_render_task_from_pathfinder(glyph_key: &GlyphKey,
// TODO: pathfinder will need to support 2D subpixel offset
let pathfinder_subpixel_offset =
pathfinder_font_renderer::SubpixelOffset(glyph_key.subpixel_offset.0 as u8);
let glyph_subpixel_offset: f64 = glyph_key.subpixel_offset.0.into();
let pathfinder_glyph_key = pathfinder_font_renderer::GlyphKey::new(glyph_key.index,
pathfinder_font_renderer::SubpixelOffset(glyph_key.subpixel_offset().0 as u8);
let glyph_subpixel_offset: f64 = glyph_key.subpixel_offset().0.into();
let pathfinder_glyph_key = pathfinder_font_renderer::GlyphKey::new(glyph_key.index(),
pathfinder_subpixel_offset);
// TODO(pcwalton): Fall back to CPU rendering if Pathfinder fails to collect the outline.


@ -210,8 +210,11 @@ impl HitTester {
let node = &self.clip_nodes[node_index.0];
let transform = self.spatial_nodes[node.spatial_node.0].world_viewport_transform;
let transformed_point = match transform.inverse() {
Some(inverted) => inverted.transform_point2d(&point),
let transformed_point = match transform
.inverse()
.and_then(|inverted| inverted.transform_point2d(&point))
{
Some(point) => point,
None => {
test.node_cache.insert(node_index, ClippedIn::NotClippedIn);
return false;
@ -236,8 +239,11 @@ impl HitTester {
let spatial_node_index = clip_and_scroll.spatial_node_index;
let scroll_node = &self.spatial_nodes[spatial_node_index.0];
let transform = scroll_node.world_content_transform;
let point_in_layer = match transform.inverse() {
Some(inverted) => inverted.transform_point2d(&point),
let point_in_layer = match transform
.inverse()
.and_then(|inverted| inverted.transform_point2d(&point))
{
Some(point) => point,
None => continue,
};
@ -277,8 +283,11 @@ impl HitTester {
let transform = scroll_node.world_content_transform;
let mut facing_backwards: Option<bool> = None; // will be computed on first use
let point_in_layer = match transform.inverse() {
Some(inverted) => inverted.transform_point2d(&point),
let point_in_layer = match transform
.inverse()
.and_then(|inverted| inverted.transform_point2d(&point))
{
Some(point) => point,
None => continue,
};
@ -308,8 +317,11 @@ impl HitTester {
// in a situation with an uninvertible transformation so we should just skip this
// result.
let root_node = &self.spatial_nodes[self.pipeline_root_nodes[&pipeline_id].0];
let point_in_viewport = match root_node.world_viewport_transform.inverse() {
Some(inverted) => inverted.transform_point2d(&point),
let point_in_viewport = match root_node.world_viewport_transform
.inverse()
.and_then(|inverted| inverted.transform_point2d(&point))
{
Some(point) => point,
None => continue,
};
@ -411,8 +423,15 @@ impl HitTest {
}
let point = &LayoutPoint::new(self.point.x, self.point.y);
self.pipeline_id.map(|id|
hit_tester.get_pipeline_root(id).world_viewport_transform.transform_point2d(point)
).unwrap_or_else(|| WorldPoint::new(self.point.x, self.point.y))
self.pipeline_id
.and_then(|id|
hit_tester
.get_pipeline_root(id)
.world_viewport_transform
.transform_point2d(point)
)
.unwrap_or_else(|| {
WorldPoint::new(self.point.x, self.point.y)
})
}
}


@ -2,8 +2,9 @@
* License, v. 2.0. If a copy of the MPL was not distributed with this
* file, You can obtain one at http://mozilla.org/MPL/2.0/. */
use api::{TileOffset, LayoutRect, LayoutSize, LayoutPoint, DeviceUintSize};
use euclid::vec2;
use api::{TileOffset, TileRange, LayoutRect, LayoutSize, LayoutPoint};
use api::{DeviceUintSize, NormalizedRect};
use euclid::{vec2, point2};
use prim_store::EdgeAaSegmentMask;
/// If repetitions are far enough apart that only one is within
@ -225,6 +226,42 @@ pub fn for_each_tile(
}
}
pub fn compute_tile_range(
visible_area: &NormalizedRect,
image_size: &DeviceUintSize,
tile_size: u16,
) -> TileRange {
// Tile dimensions in normalized coordinates.
let tw = (image_size.width as f32) / (tile_size as f32);
let th = (image_size.height as f32) / (tile_size as f32);
let t0 = point2(
f32::floor(visible_area.origin.x * tw),
f32::floor(visible_area.origin.y * th),
).cast::<u16>();
let t1 = point2(
f32::ceil(visible_area.max_x() * tw),
f32::ceil(visible_area.max_y() * th),
).cast::<u16>();
TileRange {
origin: t0,
size: (t1 - t0).to_size(),
}
}
pub fn for_each_tile_in_range(
range: &TileRange,
callback: &mut FnMut(TileOffset),
) {
for y in 0..range.size.height {
for x in 0..range.size.width {
callback(range.origin + vec2(x, y));
}
}
}
#[cfg(test)]
mod tests {
use super::*;


@ -79,7 +79,7 @@ pub struct PictureCacheKey {
// we relax that, we'll need to consider some
// extra parameters, depending on transform.
// This is a globally unique id of the scene this picture
// is associated with, to avoid picture id collisions.
scene_id: u64,
@ -593,7 +593,16 @@ fn calculate_screen_uv(
rendered_rect: &DeviceRect,
device_pixel_scale: DevicePixelScale,
) -> DevicePoint {
let world_pos = transform.m.transform_point2d(local_pos);
let world_pos = match transform.m.transform_point2d(local_pos) {
Some(pos) => pos,
None => {
//Warning: this is incorrect and needs to be fixed properly.
// The transformation has put a local vertex behind the near clipping plane...
// Proper solution would be to keep the near-clipping-plane results around
// (currently produced by calculate_screen_bounding_rect) and use them here.
return DevicePoint::new(0.5, 0.5);
}
};
let mut device_pos = world_pos * device_pixel_scale;


@ -361,7 +361,7 @@ impl FontContext {
) -> Option<GlyphDimensions> {
self.get_ct_font(font.font_key, font.size, &font.variations)
.and_then(|ref ct_font| {
let glyph = key.index as CGGlyph;
let glyph = key.index() as CGGlyph;
let bitmap = is_bitmap_font(ct_font);
let (x_offset, y_offset) = if bitmap { (0.0, 0.0) } else { font.get_subpx_offset(key) };
let transform = if font.synthetic_italics.is_enabled() ||
@ -525,7 +525,7 @@ impl FontContext {
None
};
let glyph = key.index as CGGlyph;
let glyph = key.index() as CGGlyph;
let (strike_scale, pixel_step) = if bitmap { (y_scale, 1.0) } else { (x_scale, y_scale / x_scale) };
let extra_strikes = font.get_extra_strikes(strike_scale / scale);
let metrics = get_glyph_metrics(


@ -328,7 +328,7 @@ impl FontContext {
};
if succeeded(result) {
result = unsafe { FT_Load_Glyph(face.face, glyph.index as FT_UInt, load_flags as FT_Int32) };
result = unsafe { FT_Load_Glyph(face.face, glyph.index() as FT_UInt, load_flags as FT_Int32) };
};
if succeeded(result) {
@ -356,7 +356,7 @@ impl FontContext {
error!("Unable to load glyph");
debug!(
"{} of size {:?} from font {:?}, {:?}",
glyph.index,
glyph.index(),
font.size,
font.font_key,
result


@ -189,7 +189,7 @@ impl FontContext {
bitmaps: bool,
) -> dwrote::GlyphRunAnalysis {
let face = self.get_font_face(font);
let glyph = key.index as u16;
let glyph = key.index() as u16;
let advance = 0.0f32;
let offset = dwrote::GlyphOffset {
advanceOffset: 0.0,
@ -284,7 +284,7 @@ impl FontContext {
}
let face = self.get_font_face(font);
face.get_design_glyph_metrics(&[key.index as u16], false)
face.get_design_glyph_metrics(&[key.index() as u16], false)
.first()
.map(|metrics| {
let em_size = size / 16.;


@ -2653,9 +2653,8 @@ impl PrimitiveStore {
let scroll_node = &frame_context
.clip_scroll_tree
.spatial_nodes[run.clip_and_scroll.spatial_node_index.0];
let clip_chain = frame_context
.clip_scroll_tree
.get_clip_chain(run.clip_and_scroll.clip_chain_index);
let clip_chain = &frame_context
.clip_chains[run.clip_and_scroll.clip_chain_index.0];
// Mark whether this picture contains any complex coordinate
// systems, due to either the scroll node or the clip-chain.
@ -2756,20 +2755,27 @@ impl PrimitiveStore {
};
if let Some(ref matrix) = parent_relative_transform {
let bounds = matrix.transform_rect(&clipped_rect);
result.local_rect_in_actual_parent_space =
result.local_rect_in_actual_parent_space.union(&bounds);
match matrix.transform_rect(&clipped_rect) {
Some(bounds) => {
result.local_rect_in_actual_parent_space =
result.local_rect_in_actual_parent_space.union(&bounds);
}
None => {
warn!("parent relative transform can't transform the primitive rect for {:?}", prim_index);
}
}
}
if let Some(ref matrix) = original_relative_transform {
let bounds = matrix.transform_rect(&clipped_rect);
result.local_rect_in_original_parent_space =
result.local_rect_in_original_parent_space.union(&bounds);
}
if let Some(ref matrix) = parent_relative_transform {
let bounds = matrix.transform_rect(&prim_local_rect);
result.local_rect_in_actual_parent_space =
result.local_rect_in_actual_parent_space.union(&bounds);
match matrix.transform_rect(&clipped_rect) {
Some(bounds) => {
result.local_rect_in_original_parent_space =
result.local_rect_in_original_parent_space.union(&bounds);
}
None => {
warn!("original relative transform can't transform the primitive rect for {:?}", prim_index);
}
}
}
}
}


@ -8,7 +8,7 @@ use api::{BuiltDisplayListIter, SpecificDisplayItem};
use api::{DeviceIntPoint, DevicePixelScale, DeviceUintPoint, DeviceUintRect, DeviceUintSize};
use api::{DocumentId, DocumentLayer, ExternalScrollId, FrameMsg, HitTestFlags, HitTestResult};
use api::{IdNamespace, LayoutPoint, PipelineId, RenderNotifier, SceneMsg, ScrollClamping};
use api::{ScrollLocation, ScrollNodeState, TransactionMsg};
use api::{ScrollLocation, ScrollNodeState, TransactionMsg, ResourceUpdate, ImageKey};
use api::channel::{MsgReceiver, Payload};
#[cfg(feature = "capture")]
use api::CaptureBits;
@ -226,10 +226,11 @@ impl Document {
fn forward_transaction_to_scene_builder(
&mut self,
transaction_msg: TransactionMsg,
blobs_to_rasterize: &[ImageKey],
document_ops: &DocumentOps,
document_id: DocumentId,
scene_id: u64,
resource_cache: &ResourceCache,
resource_cache: &mut ResourceCache,
scene_tx: &Sender<SceneBuilderRequest>,
) {
// Do as much of the error handling as possible here before dispatching to
@ -252,8 +253,14 @@ impl Document {
None
};
let (blob_rasterizer, blob_requests) = resource_cache.create_blob_scene_builder_requests(
blobs_to_rasterize
);
scene_tx.send(SceneBuilderRequest::Transaction {
scene: scene_request,
blob_requests,
blob_rasterizer,
resource_updates: transaction_msg.resource_updates,
frame_ops: transaction_msg.frame_ops,
render: transaction_msg.generate_frame,
@ -718,6 +725,8 @@ impl RenderBackend {
frame_ops,
render,
result_tx,
rasterized_blobs,
blob_rasterizer,
} => {
let mut ops = DocumentOps::nop();
if let Some(doc) = self.documents.get_mut(&document_id) {
@ -755,10 +764,16 @@ impl RenderBackend {
use_scene_builder_thread: false,
};
self.resource_cache.add_rasterized_blob_images(rasterized_blobs);
if let Some(rasterizer) = blob_rasterizer {
self.resource_cache.set_blob_rasterizer(rasterizer);
}
if !transaction_msg.is_empty() || ops.render {
self.update_document(
document_id,
transaction_msg,
&[],
&mut frame_counter,
&mut profile_counters,
ops,
@ -825,9 +840,15 @@ impl RenderBackend {
ApiMsg::FlushSceneBuilder(tx) => {
self.scene_tx.send(SceneBuilderRequest::Flush(tx)).unwrap();
}
ApiMsg::UpdateResources(updates) => {
self.resource_cache
.update_resources(updates, &mut profile_counters.resources);
ApiMsg::UpdateResources(mut updates) => {
self.resource_cache.pre_scene_building_update(
&mut updates,
&mut profile_counters.resources
);
self.resource_cache.post_scene_building_update(
updates,
&mut profile_counters.resources
);
}
ApiMsg::GetGlyphDimensions(instance_key, glyph_indices, tx) => {
let mut glyph_dimensions = Vec::with_capacity(glyph_indices.len());
@ -956,10 +977,18 @@ impl RenderBackend {
ApiMsg::ShutDown => {
return false;
}
ApiMsg::UpdateDocument(document_id, doc_msgs) => {
ApiMsg::UpdateDocument(document_id, mut doc_msgs) => {
let blob_requests = get_blob_image_updates(&doc_msgs.resource_updates);
self.resource_cache.pre_scene_building_update(
&mut doc_msgs.resource_updates,
&mut profile_counters.resources,
);
self.update_document(
document_id,
doc_msgs,
&blob_requests,
frame_counter,
profile_counters,
DocumentOps::nop(),
@ -975,6 +1004,7 @@ impl RenderBackend {
&mut self,
document_id: DocumentId,
mut transaction_msg: TransactionMsg,
blob_requests: &[ImageKey],
frame_counter: &mut u32,
profile_counters: &mut BackendProfileCounters,
initial_op: DocumentOps,
@ -982,6 +1012,10 @@ impl RenderBackend {
) {
let mut op = initial_op;
if !blob_requests.is_empty() {
transaction_msg.use_scene_builder_thread = true;
}
for scene_msg in transaction_msg.scene_ops.drain(..) {
let _timer = profile_counters.total_time.timer();
op.combine(
@ -1000,17 +1034,18 @@ impl RenderBackend {
doc.forward_transaction_to_scene_builder(
transaction_msg,
blob_requests,
&op,
document_id,
scene_id,
&self.resource_cache,
&mut self.resource_cache,
&self.scene_tx,
);
return;
}
self.resource_cache.update_resources(
self.resource_cache.post_scene_building_update(
transaction_msg.resource_updates,
&mut profile_counters.resources,
);
@ -1208,6 +1243,28 @@ impl RenderBackend {
}
}
fn get_blob_image_updates(updates: &[ResourceUpdate]) -> Vec<ImageKey> {
let mut requests = Vec::new();
for update in updates {
match *update {
ResourceUpdate::AddImage(ref img) => {
if img.data.is_blob() {
requests.push(img.key);
}
}
ResourceUpdate::UpdateImage(ref img) => {
if img.data.is_blob() {
requests.push(img.key);
}
}
_ => {}
}
}
requests
}
#[cfg(feature = "debugger")]
trait ToDebugString {
fn debug_string(&self) -> String;


@ -9,7 +9,7 @@
//!
//! [renderer]: struct.Renderer.html
use api::{BlobImageRenderer, ColorF, DeviceIntPoint, DeviceIntRect, DeviceIntSize};
use api::{BlobImageHandler, ColorF, DeviceIntPoint, DeviceIntRect, DeviceIntSize};
use api::{DeviceUintPoint, DeviceUintRect, DeviceUintSize, DocumentId, Epoch, ExternalImageId};
use api::{ExternalImageType, FontRenderMode, FrameMsg, ImageFormat, PipelineId};
use api::{RenderApiSender, RenderNotifier, TexelRect, TextureTarget};
@ -1693,7 +1693,7 @@ impl Renderer {
let sampler = options.sampler;
let enable_render_on_scroll = options.enable_render_on_scroll;
let blob_image_renderer = options.blob_image_renderer.take();
let blob_image_handler = options.blob_image_handler.take();
let thread_listener_for_render_backend = thread_listener.clone();
let thread_listener_for_scene_builder = thread_listener.clone();
let scene_builder_hooks = options.scene_builder_hooks;
@ -1729,7 +1729,7 @@ impl Renderer {
let resource_cache = ResourceCache::new(
texture_cache,
glyph_rasterizer,
blob_image_renderer,
blob_image_handler,
);
let mut backend = RenderBackend::new(
@ -4095,7 +4095,7 @@ pub struct RendererOptions {
pub scatter_gpu_cache_updates: bool,
pub upload_method: UploadMethod,
pub workers: Option<Arc<ThreadPool>>,
pub blob_image_renderer: Option<Box<BlobImageRenderer>>,
pub blob_image_handler: Option<Box<BlobImageHandler>>,
pub recorder: Option<Box<ApiRecordingReceiver>>,
pub thread_listener: Option<Box<ThreadListener + Send + Sync>>,
pub enable_render_on_scroll: bool,
@ -4130,7 +4130,7 @@ impl Default for RendererOptions {
// but we are unable to make this decision here, so picking the reasonable medium.
upload_method: UploadMethod::PixelBuffer(VertexUsageHint::Stream),
workers: None,
blob_image_renderer: None,
blob_image_handler: None,
recorder: None,
thread_listener: None,
enable_render_on_scroll: true,


@ -2,15 +2,15 @@
* License, v. 2.0. If a copy of the MPL was not distributed with this
* file, You can obtain one at http://mozilla.org/MPL/2.0/. */
use api::{AddFont, BlobImageResources, ResourceUpdate};
use api::{BlobImageDescriptor, BlobImageError, BlobImageRenderer, BlobImageRequest};
use api::{AddFont, BlobImageResources, AsyncBlobImageRasterizer, ResourceUpdate};
use api::{BlobImageDescriptor, BlobImageHandler, BlobImageRequest};
use api::{ClearCache, ColorF, DevicePoint, DeviceUintPoint, DeviceUintRect, DeviceUintSize};
use api::{FontInstanceKey, FontKey, FontTemplate, GlyphIndex};
use api::{ExternalImageData, ExternalImageType};
use api::{ExternalImageData, ExternalImageType, BlobImageResult, BlobImageParams};
use api::{FontInstanceOptions, FontInstancePlatformOptions, FontVariation};
use api::{GlyphDimensions, IdNamespace};
use api::{ImageData, ImageDescriptor, ImageKey, ImageRendering};
use api::{TileOffset, TileSize};
use api::{TileOffset, TileSize, TileRange, NormalizedRect, BlobImageData};
use app_units::Au;
#[cfg(feature = "capture")]
use capture::ExternalCaptureImage;
@ -19,13 +19,14 @@ use capture::PlainExternalImage;
#[cfg(any(feature = "replay", feature = "png"))]
use capture::CaptureConfig;
use device::TextureFilter;
use euclid::size2;
use euclid::{point2, size2};
use glyph_cache::GlyphCache;
#[cfg(not(feature = "pathfinder"))]
use glyph_cache::GlyphCacheEntry;
use glyph_rasterizer::{FontInstance, GlyphFormat, GlyphKey, GlyphRasterizer};
use gpu_cache::{GpuCache, GpuCacheAddress, GpuCacheHandle};
use gpu_types::UvRectKind;
use image::{compute_tile_range, for_each_tile_in_range};
use internal_types::{FastHashMap, FastHashSet, SourceTexture, TextureUpdateList};
use profiler::{ResourceProfileCounters, TextureCacheProfileCounters};
use render_backend::FrameId;
@ -95,11 +96,25 @@ enum State {
QueryResources,
}
#[derive(Debug)]
/// Post scene building state.
struct RasterizedBlobImage {
data: FastHashMap<Option<TileOffset>, BlobImageResult>,
}
/// Pre scene building state.
/// We use this to generate the async blob rendering requests.
struct BlobImageTemplate {
descriptor: ImageDescriptor,
tiling: Option<TileSize>,
dirty_rect: Option<DeviceUintRect>,
viewport_tiles: Option<TileRange>,
}
struct ImageResource {
data: ImageData,
descriptor: ImageDescriptor,
tiling: Option<TileSize>,
viewport_tiles: Option<TileRange>,
}
#[derive(Clone, Debug)]
@ -360,14 +375,22 @@ pub struct ResourceCache {
// both blobs and regular images.
pending_image_requests: FastHashSet<ImageRequest>,
blob_image_renderer: Option<Box<BlobImageRenderer>>,
blob_image_handler: Option<Box<BlobImageHandler>>,
rasterized_blob_images: FastHashMap<ImageKey, RasterizedBlobImage>,
blob_image_templates: FastHashMap<ImageKey, BlobImageTemplate>,
// If while building a frame we encounter blobs that we didn't already
// rasterize, add them to this list and rasterize them synchronously.
missing_blob_images: Vec<BlobImageParams>,
// The rasterizer associated with the current scene.
blob_image_rasterizer: Option<Box<AsyncBlobImageRasterizer>>,
}
impl ResourceCache {
pub fn new(
texture_cache: TextureCache,
glyph_rasterizer: GlyphRasterizer,
blob_image_renderer: Option<Box<BlobImageRenderer>>,
blob_image_handler: Option<Box<BlobImageHandler>>,
) -> Self {
ResourceCache {
cached_glyphs: GlyphCache::new(),
@ -380,7 +403,11 @@ impl ResourceCache {
current_frame_id: FrameId(0),
pending_image_requests: FastHashSet::default(),
glyph_rasterizer,
blob_image_renderer,
blob_image_handler,
rasterized_blob_images: FastHashMap::default(),
blob_image_templates: FastHashMap::default(),
missing_blob_images: Vec::new(),
blob_image_rasterizer: None,
}
}
@ -425,7 +452,7 @@ impl ResourceCache {
).expect("Failed to request a render task from the resource cache!")
}
pub fn update_resources(
pub fn post_scene_building_update(
&mut self,
updates: Vec<ResourceUpdate>,
profile_counters: &mut ResourceProfileCounters,
@ -448,19 +475,78 @@ impl ResourceCache {
ResourceUpdate::DeleteImage(img) => {
self.delete_image_template(img);
}
ResourceUpdate::AddFont(font) => match font {
AddFont::Raw(id, bytes, index) => {
profile_counters.font_templates.inc(bytes.len());
self.add_font_template(id, FontTemplate::Raw(Arc::new(bytes), index));
}
AddFont::Native(id, native_font_handle) => {
self.add_font_template(id, FontTemplate::Native(native_font_handle));
}
},
ResourceUpdate::DeleteFont(font) => {
self.delete_font_template(font);
}
ResourceUpdate::AddFontInstance(instance) => {
ResourceUpdate::DeleteFontInstance(font) => {
self.delete_font_instance(font);
}
ResourceUpdate::SetImageVisibleArea(key, area) => {
self.discard_tiles_outside_visible_area(key, &area);
}
ResourceUpdate::AddFont(_) |
ResourceUpdate::AddFontInstance(_) => {
// Handled in update_resources_pre_scene_building
}
}
}
}
pub fn pre_scene_building_update(
&mut self,
updates: &mut Vec<ResourceUpdate>,
profile_counters: &mut ResourceProfileCounters,
) {
let mut new_updates = Vec::with_capacity(updates.len());
for update in mem::replace(updates, Vec::new()) {
match update {
ResourceUpdate::AddImage(ref img) => {
if let ImageData::Blob(ref blob_data) = img.data {
self.add_blob_image(
img.key,
&img.descriptor,
img.tiling,
Arc::clone(blob_data),
);
}
}
ResourceUpdate::UpdateImage(ref img) => {
if let ImageData::Blob(ref blob_data) = img.data {
self.update_blob_image(
img.key,
&img.descriptor,
&img.dirty_rect,
Arc::clone(blob_data)
);
}
}
ResourceUpdate::SetImageVisibleArea(key, area) => {
if let Some(template) = self.blob_image_templates.get_mut(&key) {
if let Some(tile_size) = template.tiling {
template.viewport_tiles = Some(compute_tile_range(
&area,
&template.descriptor.size,
tile_size,
));
}
}
}
_ => {}
}
match update {
ResourceUpdate::AddFont(font) => {
match font {
AddFont::Raw(id, bytes, index) => {
profile_counters.font_templates.inc(bytes.len());
self.add_font_template(id, FontTemplate::Raw(Arc::new(bytes), index));
}
AddFont::Native(id, native_font_handle) => {
self.add_font_template(id, FontTemplate::Native(native_font_handle));
}
}
}
ResourceUpdate::AddFontInstance(mut instance) => {
self.add_font_instance(
instance.key,
instance.font_key,
@ -470,11 +556,26 @@ impl ResourceCache {
instance.variations,
);
}
ResourceUpdate::DeleteFontInstance(instance) => {
self.delete_font_instance(instance);
other => {
new_updates.push(other);
}
}
}
*updates = new_updates;
}
pub fn set_blob_rasterizer(&mut self, rasterizer: Box<AsyncBlobImageRasterizer>) {
self.blob_image_rasterizer = Some(rasterizer);
}
pub fn add_rasterized_blob_images(&mut self, images: Vec<(BlobImageRequest, BlobImageResult)>) {
for (request, result) in images {
let image = self.rasterized_blob_images.entry(request.key).or_insert_with(
|| { RasterizedBlobImage { data: FastHashMap::default() } }
);
image.data.insert(request.tile, result);
}
}
pub fn add_font_template(&mut self, font_key: FontKey, template: FontTemplate) {
@ -489,7 +590,7 @@ impl ResourceCache {
self.resources.font_templates.remove(&font_key);
self.cached_glyphs
.clear_fonts(|font| font.font_key == font_key);
if let Some(ref mut r) = self.blob_image_renderer {
if let Some(ref mut r) = self.blob_image_handler {
r.delete_font(font_key);
}
}
@ -532,7 +633,7 @@ impl ResourceCache {
.write()
.unwrap()
.remove(&instance_key);
if let Some(ref mut r) = self.blob_image_renderer {
if let Some(ref mut r) = self.blob_image_handler {
r.delete_font_instance(instance_key);
}
}
@ -559,18 +660,11 @@ impl ResourceCache {
tiling = Some(DEFAULT_TILE_SIZE);
}
if let ImageData::Blob(ref blob) = data {
self.blob_image_renderer.as_mut().unwrap().add(
image_key,
Arc::clone(&blob),
tiling,
);
}
let resource = ImageResource {
descriptor,
data,
tiling,
viewport_tiles: None,
};
self.resources.image_templates.insert(image_key, resource);
@ -594,13 +688,6 @@ impl ResourceCache {
tiling = Some(DEFAULT_TILE_SIZE);
}
if let ImageData::Blob(ref blob) = data {
self.blob_image_renderer
.as_mut()
.unwrap()
.update(image_key, Arc::clone(blob), dirty_rect);
}
// Each cache entry stores its own copy of the image's dirty rect. This allows them to be
// updated independently.
match self.cached_images.try_get_mut(&image_key) {
@ -619,6 +706,66 @@ impl ResourceCache {
descriptor,
data,
tiling,
viewport_tiles: image.viewport_tiles,
};
}
// Happens before scene building.
pub fn add_blob_image(
&mut self,
key: ImageKey,
descriptor: &ImageDescriptor,
mut tiling: Option<TileSize>,
data: Arc<BlobImageData>,
) {
let max_texture_size = self.max_texture_size();
tiling = get_blob_tiling(tiling, descriptor, max_texture_size);
self.blob_image_handler.as_mut().unwrap().add(key, data, tiling);
self.blob_image_templates.insert(
key,
BlobImageTemplate {
descriptor: *descriptor,
tiling,
dirty_rect: Some(
DeviceUintRect::new(
DeviceUintPoint::zero(),
descriptor.size,
)
),
viewport_tiles: None,
},
);
}
// Happens before scene building.
pub fn update_blob_image(
&mut self,
key: ImageKey,
descriptor: &ImageDescriptor,
dirty_rect: &Option<DeviceUintRect>,
data: Arc<BlobImageData>,
) {
self.blob_image_handler.as_mut().unwrap().update(key, data, *dirty_rect);
let max_texture_size = self.max_texture_size();
let image = self.blob_image_templates
.get_mut(&key)
.expect("Attempt to update non-existent blob image");
let tiling = get_blob_tiling(image.tiling, descriptor, max_texture_size);
*image = BlobImageTemplate {
descriptor: *descriptor,
tiling,
dirty_rect: match (*dirty_rect, image.dirty_rect) {
(Some(rect), Some(prev_rect)) => Some(rect.union(&prev_rect)),
(Some(rect), None) => Some(rect),
(None, _) => None,
},
viewport_tiles: image.viewport_tiles,
};
}
@ -629,7 +776,9 @@ impl ResourceCache {
match value {
Some(image) => if image.data.is_blob() {
self.blob_image_renderer.as_mut().unwrap().delete(image_key);
self.blob_image_handler.as_mut().unwrap().delete(image_key);
self.blob_image_templates.remove(&image_key);
self.rasterized_blob_images.remove(&image_key);
},
None => {
warn!("Delete the non-exist key");
@ -726,73 +875,216 @@ impl ResourceCache {
ImageResult::Err(_) => panic!("Errors should already have been handled"),
};
let needs_upload = self.texture_cache
.request(&entry.texture_cache_handle, gpu_cache);
self.texture_cache.request(&entry.texture_cache_handle, gpu_cache);
let dirty_rect = if needs_upload {
// the texture cache entry has been evicted, treat it as all dirty
None
} else if entry.dirty_rect.is_none() {
return
} else {
entry.dirty_rect
};
self.pending_image_requests.insert(request);
if !self.pending_image_requests.insert(request) {
return
}
// If we are tiling, then we need to confirm the dirty rect intersects
// the tile before leaving the request in the pending queue.
//
// We can start a worker thread rasterizing right now, if:
// - The image is a blob.
// - The blob hasn't already been requested this frame.
if template.data.is_blob() || dirty_rect.is_some() {
let (offset, size) = match request.tile {
Some(tile_offset) => {
let tile_size = template.tiling.unwrap();
let actual_size = compute_tile_size(
&template.descriptor,
tile_size,
tile_offset,
);
if let Some(dirty) = dirty_rect {
if intersect_for_tile(dirty, actual_size, tile_size, tile_offset).is_none() {
// don't bother requesting unchanged tiles
entry.dirty_rect = None;
self.pending_image_requests.remove(&request);
return
}
}
let offset = DevicePoint::new(
tile_offset.x as f32 * tile_size as f32,
tile_offset.y as f32 * tile_size as f32,
);
(offset, actual_size)
}
None => (DevicePoint::zero(), template.descriptor.size),
if template.data.is_blob() {
let request: BlobImageRequest = request.into();
let missing = match self.rasterized_blob_images.get(&request.key) {
Some(img) => !img.data.contains_key(&request.tile),
None => true,
};
if template.data.is_blob() {
if let Some(ref mut renderer) = self.blob_image_renderer {
renderer.request(
&self.resources,
request.into(),
&BlobImageDescriptor {
size,
offset,
// For some reason the blob image is missing. We'll fall back to
// rasterizing it on the render backend thread.
if missing {
let descriptor = match template.tiling {
Some(tile_size) => {
let tile = request.tile.unwrap();
BlobImageDescriptor {
offset: DevicePoint::new(
tile.x as f32 * tile_size as f32,
tile.y as f32 * tile_size as f32,
),
size: compute_tile_size(
&template.descriptor,
tile_size,
tile,
),
format: template.descriptor.format,
},
dirty_rect,
);
}
}
}
None => {
BlobImageDescriptor {
offset: DevicePoint::origin(),
size: template.descriptor.size,
format: template.descriptor.format,
}
}
};
self.missing_blob_images.push(
BlobImageParams {
request,
descriptor,
dirty_rect: None,
}
);
}
}
}
pub fn create_blob_scene_builder_requests(
&mut self,
keys: &[ImageKey]
) -> (Option<Box<AsyncBlobImageRasterizer>>, Vec<BlobImageParams>) {
if self.blob_image_handler.is_none() {
return (None, Vec::new());
}
let mut blob_request_params = Vec::new();
for key in keys {
let template = self.blob_image_templates.get_mut(key).unwrap();
if let Some(tile_size) = template.tiling {
// If we know that only a portion of the blob image is in the viewport,
// only request these visible tiles since blob images can be huge.
let mut tiles = template.viewport_tiles.unwrap_or_else(|| {
// Default to requesting the full range of tiles.
compute_tile_range(
&NormalizedRect {
origin: point2(0.0, 0.0),
size: size2(1.0, 1.0),
},
&template.descriptor.size,
tile_size,
)
});
// Don't request tiles that weren't invalidated.
if let Some(dirty_rect) = template.dirty_rect {
let f32_size = template.descriptor.size.to_f32();
let normalized_dirty_rect = NormalizedRect {
origin: point2(
dirty_rect.origin.x as f32 / f32_size.width,
dirty_rect.origin.y as f32 / f32_size.height,
),
size: size2(
dirty_rect.size.width as f32 / f32_size.width,
dirty_rect.size.height as f32 / f32_size.height,
),
};
let dirty_tiles = compute_tile_range(
&normalized_dirty_rect,
&template.descriptor.size,
tile_size,
);
tiles = tiles.intersection(&dirty_tiles).unwrap_or(TileRange::zero());
}
// This code tries to keep things sane if Gecko sends
// nonsensical blob image requests.
// Constant here definitely needs to be tweaked.
const MAX_TILES_PER_REQUEST: u32 = 64;
while tiles.size.width as u32 * tiles.size.height as u32 > MAX_TILES_PER_REQUEST {
// Remove tiles in the largest dimension.
if tiles.size.width > tiles.size.height {
tiles.size.width -= 2;
tiles.origin.x += 1;
} else {
tiles.size.height -= 2;
tiles.origin.y += 1;
}
}
for_each_tile_in_range(&tiles, &mut|tile| {
let descriptor = BlobImageDescriptor {
offset: DevicePoint::new(
tile.x as f32 * tile_size as f32,
tile.y as f32 * tile_size as f32,
),
size: compute_tile_size(
&template.descriptor,
tile_size,
tile,
),
format: template.descriptor.format,
};
blob_request_params.push(
BlobImageParams {
request: BlobImageRequest {
key: *key,
tile: Some(tile),
},
descriptor,
dirty_rect: None,
}
);
});
} else {
// TODO: to support partial rendering of non-tiled blobs we
// need to know that the current version of the blob is uploaded
// to the texture cache and get the guarantee that it will not
// get evicted by the time the updated blob is rasterized and
// uploaded.
// Alternatively we could make it the responsibility of the blob
// renderer to always output the full image. This could be based
// a similar copy-on-write mechanism as gecko tiling.
blob_request_params.push(
BlobImageParams {
request: BlobImageRequest {
key: *key,
tile: None,
},
descriptor: BlobImageDescriptor {
offset: DevicePoint::zero(),
size: template.descriptor.size,
format: template.descriptor.format,
},
dirty_rect: None,
}
);
}
template.dirty_rect = None;
}
let handler = self.blob_image_handler.as_mut().unwrap();
handler.prepare_resources(&self.resources, &blob_request_params);
(Some(handler.create_blob_rasterizer()), blob_request_params)
}
fn discard_tiles_outside_visible_area(
&mut self,
key: ImageKey,
area: &NormalizedRect
) {
let template = match self.blob_image_templates.get(&key) {
Some(template) => template,
None => {
//println!("Missing image template (key={:?})!", key);
return;
}
};
let tile_size = match template.tiling {
Some(size) => size,
None => { return; }
};
let image = match self.rasterized_blob_images.get_mut(&key) {
Some(image) => image,
None => {
//println!("Missing rasterized blob (key={:?})!", key);
return;
}
};
let tile_range = compute_tile_range(
&area,
&template.descriptor.size,
tile_size,
);
image.data.retain(|tile, _| {
match *tile {
Some(offset) => tile_range.contains(&offset),
// This would be a bug. If we get here the blob should be tiled.
None => {
error!("Blob image template and image data tiling don't match.");
false
}
}
});
}
pub fn request_glyphs(
&mut self,
mut font: FontInstance,
@ -1021,6 +1313,8 @@ impl ResourceCache {
texture_cache_profile,
);
self.rasterize_missing_blob_images();
// Apply any updates of new / updated images (incl. blobs) to the texture cache.
self.update_texture_cache(gpu_cache);
render_tasks.prepare_for_render();
@ -1032,6 +1326,26 @@ impl ResourceCache {
self.texture_cache.end_frame(texture_cache_profile);
}
fn rasterize_missing_blob_images(&mut self) {
if self.missing_blob_images.is_empty() {
return;
}
self.blob_image_handler
.as_mut()
.unwrap()
.prepare_resources(&self.resources, &self.missing_blob_images);
let rasterized_blobs = self.blob_image_rasterizer
.as_mut()
.unwrap()
.rasterize(&self.missing_blob_images);
self.add_rasterized_blob_images(rasterized_blobs);
self.missing_blob_images.clear();
}
fn update_texture_cache(&mut self, gpu_cache: &mut GpuCache) {
for request in self.pending_image_requests.drain() {
let image_template = self.resources.image_templates.get_mut(request.key).unwrap();
@ -1044,28 +1358,20 @@ impl ResourceCache {
image_template.data.clone()
}
ImageData::Blob(..) => {
// Extract the rasterized image from the blob renderer.
match self.blob_image_renderer
.as_mut()
.unwrap()
.resolve(request.into())
{
Ok(image) => ImageData::new(image.data),
// TODO(nical): I think that we should handle these somewhat gracefully,
// at least in the out-of-memory scenario.
Err(BlobImageError::Oom) => {
// This one should be recoverable-ish.
panic!("Failed to render a vector image (OOM)");
let blob_image = self.rasterized_blob_images.get(&request.key).unwrap();
match blob_image.data.get(&request.tile) {
Some(result) => {
let result = result
.as_ref()
.expect("Failed to render a blob image");
// TODO: we may want to not panic and show a placeholder instead.
ImageData::Raw(Arc::clone(&result.data))
}
Err(BlobImageError::InvalidKey) => {
panic!("Invalid vector image key");
}
Err(BlobImageError::InvalidData) => {
// TODO(nical): If we run into this we should kill the content process.
panic!("Invalid vector image data");
}
Err(BlobImageError::Other(msg)) => {
panic!("Vector image error {}", msg);
None => {
debug_assert!(false, "invalid blob image request during frame building");
continue;
}
}
}
@ -1197,12 +1503,27 @@ impl ResourceCache {
self.cached_glyphs
.clear_fonts(|font| font.font_key.0 == namespace);
if let Some(ref mut r) = self.blob_image_renderer {
if let Some(ref mut r) = self.blob_image_handler {
r.clear_namespace(namespace);
}
}
}
pub fn get_blob_tiling(
tiling: Option<TileSize>,
descriptor: &ImageDescriptor,
max_texture_size: u32,
) -> Option<TileSize> {
if tiling.is_none() &&
(descriptor.size.width > max_texture_size ||
descriptor.size.height > max_texture_size) {
return Some(DEFAULT_TILE_SIZE);
}
tiling
}
// Compute the width and height of a tile depending on its position in the image.
pub fn compute_tile_size(
descriptor: &ImageDescriptor,
@ -1364,25 +1685,29 @@ impl ResourceCache {
}
ImageData::Blob(_) => {
assert_eq!(template.tiling, None);
let request = BlobImageRequest {
key,
//TODO: support tiled blob images
// https://github.com/servo/webrender/issues/2236
tile: None,
};
let renderer = self.blob_image_renderer.as_mut().unwrap();
renderer.request(
&self.resources,
request,
&BlobImageDescriptor {
size: desc.size,
offset: DevicePoint::zero(),
format: desc.format,
},
None,
);
let result = renderer.resolve(request)
.expect("Blob resolve failed");
let blob_request_params = &[
BlobImageParams {
request: BlobImageRequest {
key,
//TODO: support tiled blob images
// https://github.com/servo/webrender/issues/2236
tile: None,
},
descriptor: BlobImageDescriptor {
size: desc.size,
offset: DevicePoint::zero(),
format: desc.format,
},
dirty_rect: None,
}
];
let blob_handler = self.blob_image_handler.as_mut().unwrap();
blob_handler.prepare_resources(&self.resources, blob_request_params);
let mut rasterizer = blob_handler.create_blob_rasterizer();
let (_, result) = rasterizer.rasterize(blob_request_params).pop().unwrap();
let result = result.expect("Blob rasterization failed");
assert_eq!(result.size, desc.size);
assert_eq!(result.data.len(), desc.compute_total_size() as usize);
@ -1567,6 +1892,7 @@ impl ResourceCache {
data,
descriptor: template.descriptor,
tiling: template.tiling,
viewport_tiles: None,
});
}


@ -2,6 +2,7 @@
* License, v. 2.0. If a copy of the MPL was not distributed with this
* file, You can obtain one at http://mozilla.org/MPL/2.0/. */
use api::{AsyncBlobImageRasterizer, BlobImageRequest, BlobImageParams, BlobImageResult};
use api::{DocumentId, PipelineId, ApiMsg, FrameMsg, ResourceUpdate};
use api::channel::MsgSender;
use display_list_flattener::build_scene;
@ -20,6 +21,8 @@ pub enum SceneBuilderRequest {
Transaction {
document_id: DocumentId,
scene: Option<SceneRequest>,
blob_requests: Vec<BlobImageParams>,
blob_rasterizer: Option<Box<AsyncBlobImageRasterizer>>,
resource_updates: Vec<ResourceUpdate>,
frame_ops: Vec<FrameMsg>,
render: bool,
@ -35,6 +38,8 @@ pub enum SceneBuilderResult {
document_id: DocumentId,
built_scene: Option<BuiltScene>,
resource_updates: Vec<ResourceUpdate>,
rasterized_blobs: Vec<(BlobImageRequest, BlobImageResult)>,
blob_rasterizer: Option<Box<AsyncBlobImageRasterizer>>,
frame_ops: Vec<FrameMsg>,
render: bool,
result_tx: Option<Sender<SceneSwapResult>>,
@ -134,6 +139,8 @@ impl SceneBuilder {
SceneBuilderRequest::Transaction {
document_id,
scene,
blob_requests,
mut blob_rasterizer,
resource_updates,
frame_ops,
render,
@ -143,7 +150,10 @@ impl SceneBuilder {
build_scene(&self.config, request)
});
// TODO: pre-rasterization.
let rasterized_blobs = blob_rasterizer.as_mut().map_or(
Vec::new(),
|rasterizer| rasterizer.rasterize(&blob_requests),
);
// We only need the pipeline info and the result channel if we
// have a hook callback *and* if this transaction actually built
@ -172,6 +182,8 @@ impl SceneBuilder {
document_id,
built_scene,
resource_updates,
rasterized_blobs,
blob_rasterizer,
frame_ops,
render,
result_tx,


@ -33,15 +33,6 @@ pub enum SpatialNodeType {
Empty,
}
impl SpatialNodeType {
fn is_reference_frame(&self) -> bool {
match *self {
SpatialNodeType::ReferenceFrame(_) => true,
_ => false,
}
}
}
/// Contains information common among all types of ClipScrollTree nodes.
#[derive(Clone, Debug)]
pub struct SpatialNode {
@ -282,98 +273,82 @@ impl SpatialNode {
next_coordinate_system_id: &mut CoordinateSystemId,
scene_properties: &SceneProperties,
) {
if self.node_type.is_reference_frame() {
self.update_transform_for_reference_frame(
state,
next_coordinate_system_id,
scene_properties
);
return;
}
match self.node_type {
SpatialNodeType::ReferenceFrame(ref mut info) => {
// Resolve the transform against any property bindings.
let source_transform = scene_properties.resolve_layout_transform(&info.source_transform);
info.resolved_transform =
LayoutFastTransform::with_vector(info.origin_in_parent_reference_frame)
.pre_mul(&source_transform.into())
.pre_mul(&info.source_perspective);
// We calculate this here to avoid a double-borrow later.
let sticky_offset = self.calculate_sticky_offset(
&state.nearest_scrolling_ancestor_offset,
&state.nearest_scrolling_ancestor_viewport,
);
// The transformation for this viewport in world coordinates is the transformation for
// our parent reference frame, plus any accumulated scrolling offsets from nodes
// between our reference frame and this node. Finally, we also include
// whatever local transformation this reference frame provides.
let relative_transform = info.resolved_transform
.post_translate(state.parent_accumulated_scroll_offset)
.to_transform()
.with_destination::<LayoutPixel>();
self.world_viewport_transform =
state.parent_reference_frame_transform.pre_mul(&relative_transform.into());
self.world_content_transform = self.world_viewport_transform;
// The transformation for the bounds of our viewport is the parent reference frame
// transform, plus any accumulated scroll offset from our parents, plus any offset
// provided by our own sticky positioning.
let accumulated_offset = state.parent_accumulated_scroll_offset + sticky_offset;
self.world_viewport_transform = if accumulated_offset != LayoutVector2D::zero() {
state.parent_reference_frame_transform.pre_translate(&accumulated_offset)
} else {
state.parent_reference_frame_transform
};
info.invertible = self.world_viewport_transform.is_invertible();
if !info.invertible {
return;
}
// The transformation for any content inside of us is the viewport transformation, plus
// whatever scrolling offset we supply as well.
let scroll_offset = self.scroll_offset();
self.world_content_transform = if scroll_offset != LayoutVector2D::zero() {
self.world_viewport_transform.pre_translate(&scroll_offset)
} else {
self.world_viewport_transform
};
// Try to update our compatible coordinate system transform. If we cannot, start a new
// incompatible coordinate system.
match state.coordinate_system_relative_transform.update(relative_transform) {
Some(offset) => self.coordinate_system_relative_transform = offset,
None => {
self.coordinate_system_relative_transform = LayoutFastTransform::identity();
state.current_coordinate_system_id = *next_coordinate_system_id;
next_coordinate_system_id.advance();
}
}
let added_offset = state.parent_accumulated_scroll_offset + sticky_offset + scroll_offset;
self.coordinate_system_relative_transform =
state.coordinate_system_relative_transform.offset(added_offset);
self.coordinate_system_id = state.current_coordinate_system_id;
}
_ => {
// We calculate this here to avoid a double-borrow later.
let sticky_offset = self.calculate_sticky_offset(
&state.nearest_scrolling_ancestor_offset,
&state.nearest_scrolling_ancestor_viewport,
);
if let SpatialNodeType::StickyFrame(ref mut info) = self.node_type {
info.current_offset = sticky_offset;
}
// The transformation for the bounds of our viewport is the parent reference frame
// transform, plus any accumulated scroll offset from our parents, plus any offset
// provided by our own sticky positioning.
let accumulated_offset = state.parent_accumulated_scroll_offset + sticky_offset;
self.world_viewport_transform = if accumulated_offset != LayoutVector2D::zero() {
state.parent_reference_frame_transform.pre_translate(&accumulated_offset)
} else {
state.parent_reference_frame_transform
};
self.coordinate_system_id = state.current_coordinate_system_id;
}
// The transformation for any content inside of us is the viewport transformation, plus
// whatever scrolling offset we supply as well.
let scroll_offset = self.scroll_offset();
self.world_content_transform = if scroll_offset != LayoutVector2D::zero() {
self.world_viewport_transform.pre_translate(&scroll_offset)
} else {
self.world_viewport_transform
};
pub fn update_transform_for_reference_frame(
&mut self,
state: &mut TransformUpdateState,
next_coordinate_system_id: &mut CoordinateSystemId,
scene_properties: &SceneProperties,
) {
let info = match self.node_type {
SpatialNodeType::ReferenceFrame(ref mut info) => info,
_ => unreachable!("Called update_transform_for_reference_frame on non-ReferenceFrame"),
};
let added_offset = state.parent_accumulated_scroll_offset + sticky_offset + scroll_offset;
self.coordinate_system_relative_transform =
state.coordinate_system_relative_transform.offset(added_offset);
// Resolve the transform against any property bindings.
let source_transform = scene_properties.resolve_layout_transform(&info.source_transform);
info.resolved_transform =
LayoutFastTransform::with_vector(info.origin_in_parent_reference_frame)
.pre_mul(&source_transform.into())
.pre_mul(&info.source_perspective);
if let SpatialNodeType::StickyFrame(ref mut info) = self.node_type {
info.current_offset = sticky_offset;
}
// The transformation for this viewport in world coordinates is the transformation for
// our parent reference frame, plus any accumulated scrolling offsets from nodes
// between our reference frame and this node. Finally, we also include
// whatever local transformation this reference frame provides.
let relative_transform = info.resolved_transform
.post_translate(state.parent_accumulated_scroll_offset)
.to_transform()
.with_destination::<LayoutPixel>();
self.world_viewport_transform =
state.parent_reference_frame_transform.pre_mul(&relative_transform.into());
self.world_content_transform = self.world_viewport_transform;
info.invertible = self.world_viewport_transform.is_invertible();
if !info.invertible {
return;
}
// Try to update our compatible coordinate system transform. If we cannot, start a new
// incompatible coordinate system.
match state.coordinate_system_relative_transform.update(relative_transform) {
Some(offset) => self.coordinate_system_relative_transform = offset,
None => {
self.coordinate_system_relative_transform = LayoutFastTransform::identity();
state.current_coordinate_system_id = *next_coordinate_system_id;
next_coordinate_system_id.advance();
self.coordinate_system_id = state.current_coordinate_system_id;
}
}
self.coordinate_system_id = state.current_coordinate_system_id;
}
fn calculate_sticky_offset(

View File

@ -7,7 +7,7 @@ use api::{DeviceUintRect, DeviceUintSize, DocumentLayer, FilterOp, ImageFormat,
use api::{MixBlendMode, PipelineId};
use batch::{AlphaBatchBuilder, AlphaBatchContainer, ClipBatcher, resolve_image};
use clip::{ClipStore};
use clip_scroll_tree::{ClipScrollTree, SpatialNodeIndex};
use clip_scroll_tree::SpatialNodeIndex;
use device::{FrameId, Texture};
#[cfg(feature = "pathfinder")]
use euclid::{TypedPoint2D, TypedVector2D};
@ -45,7 +45,6 @@ pub struct RenderTargetContext<'a, 'rc> {
pub device_pixel_scale: DevicePixelScale,
pub prim_store: &'a PrimitiveStore,
pub resource_cache: &'rc mut ResourceCache,
pub clip_scroll_tree: &'a ClipScrollTree,
pub use_dual_source_blending: bool,
pub transforms: &'a TransformPalette,
}

View File

@ -4,7 +4,7 @@
use api::{BorderRadius, DeviceIntPoint, DeviceIntRect, DeviceIntSize, DevicePixelScale};
use api::{DevicePoint, DeviceRect, DeviceSize, LayoutPixel, LayoutPoint, LayoutRect, LayoutSize};
use api::{WorldPixel, WorldRect};
use api::{WorldPixel, WorldPoint, WorldRect};
use euclid::{Point2D, Rect, Size2D, TypedPoint2D, TypedRect, TypedSize2D};
use euclid::{TypedTransform2D, TypedTransform3D, TypedVector2D, TypedVector3D};
use euclid::{HomogeneousVector};
@ -262,15 +262,19 @@ pub fn calculate_screen_bounding_rect(
transform.transform_point2d_homogeneous(&p.to_2d()),
transform.transform_point2d(&p.to_2d())
);
transform.transform_point2d(&p.to_2d())
//TODO: change to `expect` when the near splitting code is ready
transform
.transform_point2d(&p.to_2d())
.unwrap_or(WorldPoint::zero())
})
)
} else {
// we just checked that all the points are in the positive hemisphere, so `unwrap` is valid
WorldRect::from_points(&[
homogens[0].to_point2d(),
homogens[1].to_point2d(),
homogens[2].to_point2d(),
homogens[3].to_point2d(),
homogens[0].to_point2d().unwrap(),
homogens[1].to_point2d().unwrap(),
homogens[2].to_point2d().unwrap(),
homogens[3].to_point2d().unwrap(),
])
};
@ -550,11 +554,11 @@ impl<Src, Dst> FastTransform<Src, Dst> {
}
#[inline(always)]
pub fn transform_point2d(&self, point: &TypedPoint2D<f32, Src>) -> TypedPoint2D<f32, Dst> {
pub fn transform_point2d(&self, point: &TypedPoint2D<f32, Src>) -> Option<TypedPoint2D<f32, Dst>> {
match *self {
FastTransform::Offset(offset) => {
let new_point = *point + offset;
TypedPoint2D::from_untyped(&new_point.to_untyped())
Some(TypedPoint2D::from_untyped(&new_point.to_untyped()))
}
FastTransform::Transform { ref transform, .. } => transform.transform_point2d(point),
}
@ -572,10 +576,10 @@ impl<Src, Dst> FastTransform<Src, Dst> {
}
#[inline(always)]
pub fn transform_rect(&self, rect: &TypedRect<f32, Src>) -> TypedRect<f32, Dst> {
pub fn transform_rect(&self, rect: &TypedRect<f32, Src>) -> Option<TypedRect<f32, Dst>> {
match *self {
FastTransform::Offset(offset) =>
TypedRect::from_untyped(&rect.to_untyped().translate(&offset.to_untyped())),
Some(TypedRect::from_untyped(&rect.to_untyped().translate(&offset.to_untyped()))),
FastTransform::Transform { ref transform, .. } => transform.transform_rect(rect),
}
}
@ -585,7 +589,7 @@ impl<Src, Dst> FastTransform<Src, Dst> {
FastTransform::Offset(offset) =>
Some(TypedRect::from_untyped(&rect.to_untyped().translate(&-offset.to_untyped()))),
FastTransform::Transform { inverse: Some(ref inverse), is_2d: true, .. } =>
Some(inverse.transform_rect(rect)),
inverse.transform_rect(rect),
FastTransform::Transform { ref transform, is_2d: false, .. } =>
Some(transform.inverse_rect_footprint(rect)),
FastTransform::Transform { inverse: None, .. } => None,
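
As a hedged illustration of the new calling convention (not part of this patch), callers of the Option-returning transform methods now choose an explicit fallback when a point fails to project (for example because it lands behind the plane), instead of silently working with NaNs:

// Sketch only: `FastTransform`, `LayoutPoint` and `WorldPoint` are the types
// already used in this file; the fallback to zero mirrors the TODO above.
fn project_or_zero(
    transform: &FastTransform<LayoutPixel, WorldPixel>,
    point: &LayoutPoint,
) -> WorldPoint {
    transform
        .transform_point2d(point)
        .unwrap_or(WorldPoint::zero())
}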

View File

@ -17,7 +17,7 @@ bincode = "1.0"
bitflags = "1.0"
byteorder = "1.2.1"
ipc-channel = {version = "0.10.0", optional = true}
euclid = { version = "0.17", features = ["serde"] }
euclid = { version = "0.18", features = ["serde"] }
serde = { version = "=1.0.66", features = ["rc"] }
serde_derive = { version = "=1.0.66", features = ["deserialize_in_place"] }
serde_bytes = "0.10"

View File

@ -15,7 +15,7 @@ use {BuiltDisplayList, BuiltDisplayListDescriptor, ColorF, DeviceIntPoint, Devic
use {DeviceUintSize, ExternalScrollId, FontInstanceKey, FontInstanceOptions};
use {FontInstancePlatformOptions, FontKey, FontVariation, GlyphDimensions, GlyphIndex, ImageData};
use {ImageDescriptor, ImageKey, ItemTag, LayoutPoint, LayoutSize, LayoutTransform, LayoutVector2D};
use {NativeFontHandle, WorldPoint};
use {NativeFontHandle, WorldPoint, NormalizedRect};
pub type TileSize = u16;
/// Documents are rendered in the ascending order of their associated layer values.
@ -26,6 +26,7 @@ pub enum ResourceUpdate {
AddImage(AddImage),
UpdateImage(UpdateImage),
DeleteImage(ImageKey),
SetImageVisibleArea(ImageKey, NormalizedRect),
AddFont(AddFont),
DeleteFont(FontKey),
AddFontInstance(AddFontInstance),
@ -294,6 +295,10 @@ impl Transaction {
self.resource_updates.push(ResourceUpdate::DeleteImage(key));
}
pub fn set_image_visible_area(&mut self, key: ImageKey, area: NormalizedRect) {
self.resource_updates.push(ResourceUpdate::SetImageVisibleArea(key, area))
}
pub fn add_raw_font(&mut self, key: FontKey, bytes: Vec<u8>, index: u32) {
self.resource_updates
.push(ResourceUpdate::AddFont(AddFont::Raw(key, bytes, index)));
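
A hedged usage sketch of the new transaction API (not part of this patch); `key` stands for an already registered blob image:

use euclid::{point2, size2};

// Declare that only a horizontal band of the blob image is currently visible,
// so tiles outside of it don't need to be rasterized during scene building.
fn mark_band_visible(txn: &mut Transaction, key: ImageKey) {
    txn.set_image_visible_area(key, NormalizedRect {
        origin: point2(0.0, 0.25), // a quarter of the way down the image
        size: size2(1.0, 0.5),     // full width, half of the height
    });
}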

View File

@ -174,35 +174,64 @@ impl ImageData {
}
}
/// The resources exposed by the resource cache available for use by the blob rasterizer.
pub trait BlobImageResources {
fn get_font_data(&self, key: FontKey) -> &FontTemplate;
fn get_image(&self, key: ImageKey) -> Option<(&ImageData, &ImageDescriptor)>;
}
pub trait BlobImageRenderer: Send {
fn add(&mut self, key: ImageKey, data: Arc<BlobImageData>, tiling: Option<TileSize>);
/// A handler on the render backend that can create rasterizer objects which will
/// be sent to the scene builder thread to execute the rasterization.
///
/// The handler is responsible for collecting resources, managing/updating blob commands
/// and creating the rasterizer objects, but isn't expected to do any rasterization itself.
pub trait BlobImageHandler: Send {
/// Creates a snapshot of the current state of blob images in the handler.
fn create_blob_rasterizer(&mut self) -> Box<AsyncBlobImageRasterizer>;
fn update(&mut self, key: ImageKey, data: Arc<BlobImageData>, dirty_rect: Option<DeviceUintRect>);
fn delete(&mut self, key: ImageKey);
fn request(
/// A hook to let the blob image handler update any state related to resources that
/// are not bundled in the blob recording itself.
fn prepare_resources(
&mut self,
resources: &BlobImageResources,
key: BlobImageRequest,
descriptor: &BlobImageDescriptor,
dirty_rect: Option<DeviceUintRect>,
services: &BlobImageResources,
requests: &[BlobImageParams],
);
fn resolve(&mut self, key: BlobImageRequest) -> BlobImageResult;
/// Register a blob image.
fn add(&mut self, key: ImageKey, data: Arc<BlobImageData>, tiling: Option<TileSize>);
/// Update an already registered blob image.
fn update(&mut self, key: ImageKey, data: Arc<BlobImageData>, dirty_rect: Option<DeviceUintRect>);
/// Delete an already registered blob image.
fn delete(&mut self, key: ImageKey);
/// A hook to let the handler clean up any state related to a font which the resource
/// cache is about to delete.
fn delete_font(&mut self, key: FontKey);
/// A hook to let the handler clean up any state related to a font instance which the
/// resource cache is about to delete.
fn delete_font_instance(&mut self, key: FontInstanceKey);
/// A hook to let the handler clean up any state related to a given namespace before the
/// resource cache deletes it.
fn clear_namespace(&mut self, namespace: IdNamespace);
}
/// A group of rasterization requests to execute synchronously on the scene builder thread.
pub trait AsyncBlobImageRasterizer : Send {
fn rasterize(&mut self, requests: &[BlobImageParams]) -> Vec<(BlobImageRequest, BlobImageResult)>;
}
#[derive(Copy, Clone, Debug)]
pub struct BlobImageParams {
pub request: BlobImageRequest,
pub descriptor: BlobImageDescriptor,
pub dirty_rect: Option<DeviceUintRect>,
}
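
A minimal sketch (not part of this patch) of the contract between the two traits: the handler snapshots its current blob commands into a rasterizer, which can then be moved to another thread and asked to execute a batch of requests.

// `handler`, `services` and `requests` are assumed to be supplied by the caller
// (in webrender: the resource cache / render backend).
fn rasterize_snapshot(
    handler: &mut BlobImageHandler,
    services: &BlobImageResources,
    requests: &[BlobImageParams],
) -> Vec<(BlobImageRequest, BlobImageResult)> {
    // Let the handler pull in any resources (fonts, images) the requests need.
    handler.prepare_resources(services, requests);
    // Snapshot the current state of the blob commands...
    let mut rasterizer = handler.create_blob_rasterizer();
    // ...and execute the batch. In webrender this happens on the scene builder
    // thread; it runs in place here purely to illustrate the call sequence.
    rasterizer.rasterize(requests)
}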
pub type BlobImageData = Vec<u8>;
pub type BlobImageResult = Result<RasterizedBlobImage, BlobImageError>;
@ -217,7 +246,7 @@ pub struct BlobImageDescriptor {
pub struct RasterizedBlobImage {
pub size: DeviceUintSize,
pub data: Vec<u8>,
pub data: Arc<Vec<u8>>,
}
#[derive(Clone, Debug)]
@ -228,7 +257,7 @@ pub enum BlobImageError {
Other(String),
}
#[derive(Copy, Clone, Debug, PartialEq, Eq, Hash)]
#[derive(Copy, Clone, Debug, PartialEq, Eq, Hash, Serialize, Deserialize)]
pub struct BlobImageRequest {
pub key: ImageKey,
pub tile: Option<TileOffset>,

View File

@ -86,6 +86,7 @@ pub type WorldVector3D = TypedVector3D<f32, WorldPixel>;
#[derive(Hash, Clone, Copy, Debug, Eq, PartialEq, Ord, PartialOrd)]
pub struct Tiles;
pub type TileOffset = TypedPoint2D<u16, Tiles>;
pub type TileRange = TypedRect<u16, Tiles>;
/// Scaling ratio from world pixels to device pixels.
pub type DevicePixelScale = TypedScale<f32, WorldPixel, DevicePixel>;
@ -115,6 +116,12 @@ pub fn as_scroll_parent_vector(vector: &LayoutVector2D) -> ScrollLayerVector2D {
ScrollLayerVector2D::from_untyped(&vector.to_untyped())
}
/// Coordinates in normalized space (between zero and one).
#[derive(Hash, Clone, Copy, Debug, Eq, PartialEq, Ord, PartialOrd)]
pub struct NormalizedCoordinates;
pub type NormalizedRect = TypedRect<f32, NormalizedCoordinates>;
/// Stores two coordinates in texel space. The coordinates
/// are stored in texel coordinates because the texture atlas
/// may grow. Storing them as texel coords and normalizing

View File

@ -7,9 +7,9 @@ license = "MPL-2.0"
[dependencies]
rayon = "1"
thread_profiler = "0.1.1"
euclid = { version = "0.17", features = ["serde"] }
euclid = { version = "0.18", features = ["serde"] }
app_units = "0.6"
gleam = "0.5"
gleam = "0.6"
log = "0.4"
nsstring = { path = "../../servo/support/gecko/nsstring" }
bincode = "1.0"

View File

@ -1 +1 @@
88dab3f611b05516c1c54a7cb35813b796b08584
9f21ee5dba0694818a1e2e46d95734ede281447c

View File

@ -10,8 +10,8 @@ base64 = "0.6"
bincode = "1.0"
byteorder = "1.0"
env_logger = { version = "0.5", optional = true }
euclid = "0.17"
gleam = "0.5"
euclid = "0.18"
gleam = "0.6"
glutin = "0.17"
app_units = "0.6"
image = "0.19"

View File

@ -4,7 +4,7 @@
use glutin::{self, ContextBuilder, CreationError};
#[cfg(not(windows))]
use glutin::dpi::PhysicalSize;
use winit::dpi::PhysicalSize;
use winit::{EventsLoop, Window, WindowBuilder};
#[cfg(not(windows))]

View File

@ -2,7 +2,7 @@
* License, v. 2.0. If a copy of the MPL was not distributed with this
* file, You can obtain one at http://mozilla.org/MPL/2.0/. */
// A very basic BlobImageRenderer that can only render a checkerboard pattern.
// A very basic BlobImageRasterizer that can only render a checkerboard pattern.
use std::collections::HashMap;
use std::sync::Arc;
@ -65,7 +65,6 @@ fn render_blob(
.expect("empty rects should be culled by webrender");
}
for y in dirty_rect.min_y() .. dirty_rect.max_y() {
for x in dirty_rect.min_x() .. dirty_rect.max_x() {
// Apply the tile's offset. This is important: all drawing commands should be
@ -103,28 +102,26 @@ fn render_blob(
}
Ok(RasterizedBlobImage {
data: texels,
data: Arc::new(texels),
size: descriptor.size,
})
}
/// See rawtest.rs. We use this to test that blob images are requested the right
/// number of times.
pub struct BlobCallbacks {
pub request: Box<Fn(&BlobImageRequest) + Send + 'static>,
pub resolve: Box<Fn() + Send + 'static>,
pub request: Box<Fn(&[BlobImageParams]) + Send + 'static>,
}
impl BlobCallbacks {
pub fn new() -> Self {
BlobCallbacks { request: Box::new(|_|()), resolve: Box::new(|| (())) }
BlobCallbacks { request: Box::new(|_|()) }
}
}
pub struct CheckerboardRenderer {
image_cmds: HashMap<ImageKey, (ColorU, Option<TileSize>)>,
callbacks: Arc<Mutex<BlobCallbacks>>,
// The images rendered in the current frame (not kept here between frames).
rendered_images: HashMap<BlobImageRequest, BlobImageResult>,
}
impl CheckerboardRenderer {
@ -132,12 +129,11 @@ impl CheckerboardRenderer {
CheckerboardRenderer {
callbacks,
image_cmds: HashMap::new(),
rendered_images: HashMap::new(),
}
}
}
impl BlobImageRenderer for CheckerboardRenderer {
impl BlobImageHandler for CheckerboardRenderer {
fn add(&mut self, key: ImageKey, cmds: Arc<BlobImageData>, tile_size: Option<TileSize>) {
self.image_cmds
.insert(key, (deserialize_blob(&cmds[..]).unwrap(), tile_size));
@ -153,37 +149,59 @@ impl BlobImageRenderer for CheckerboardRenderer {
self.image_cmds.remove(&key);
}
fn request(
&mut self,
_resources: &BlobImageResources,
request: BlobImageRequest,
descriptor: &BlobImageDescriptor,
dirty_rect: Option<DeviceUintRect>,
) {
(self.callbacks.lock().unwrap().request)(&request);
assert!(!self.rendered_images.contains_key(&request));
// This method is where we kick off our rendering jobs.
// It should avoid doing work on the calling thread as much as possible.
// In this example we will use the thread pool to render individual tiles.
// Gather the input data to send to a worker thread.
let &(color, tile_size) = self.image_cmds.get(&request.key).unwrap();
let tile = request.tile.map(|tile| (tile_size.unwrap(), tile));
let result = render_blob(color, descriptor, tile, dirty_rect);
self.rendered_images.insert(request, result);
}
fn resolve(&mut self, request: BlobImageRequest) -> BlobImageResult {
(self.callbacks.lock().unwrap().resolve)();
self.rendered_images.remove(&request).unwrap()
}
fn delete_font(&mut self, _key: FontKey) {}
fn delete_font_instance(&mut self, _key: FontInstanceKey) {}
fn clear_namespace(&mut self, _namespace: IdNamespace) {}
fn prepare_resources(
&mut self,
_services: &BlobImageResources,
requests: &[BlobImageParams],
) {
if !requests.is_empty() {
(self.callbacks.lock().unwrap().request)(&requests);
}
}
fn create_blob_rasterizer(&mut self) -> Box<AsyncBlobImageRasterizer> {
Box::new(Rasterizer { image_cmds: self.image_cmds.clone() })
}
}
struct Command {
request: BlobImageRequest,
color: ColorU,
descriptor: BlobImageDescriptor,
tile: Option<(TileSize, TileOffset)>,
dirty_rect: Option<DeviceUintRect>
}
struct Rasterizer {
image_cmds: HashMap<ImageKey, (ColorU, Option<TileSize>)>,
}
impl AsyncBlobImageRasterizer for Rasterizer {
fn rasterize(&mut self, requests: &[BlobImageParams]) -> Vec<(BlobImageRequest, BlobImageResult)> {
let requests: Vec<Command> = requests.into_iter().map(
|item| {
let (color, tile_size) = self.image_cmds[&item.request.key];
let tile = item.request.tile.map(|tile| (tile_size.unwrap(), tile));
Command {
request: item.request,
color,
tile,
descriptor: item.descriptor,
dirty_rect: item.dirty_rect,
}
}
).collect();
requests.iter().map(|cmd| {
(cmd.request, render_blob(cmd.color, &cmd.descriptor, cmd.tile, cmd.dirty_rect))
}).collect()
}
}

View File

@ -15,7 +15,7 @@ use glutin::PixelFormatRequirements;
use glutin::ReleaseBehavior;
use glutin::Robustness;
use glutin::Api;
use glutin::dpi::PhysicalSize;
use winit::dpi::PhysicalSize;
use std::ffi::{CStr, CString};
use std::os::raw::c_int;

View File

@ -177,6 +177,7 @@ impl JsonFrameWriter {
);
}
ResourceUpdate::DeleteFontInstance(_) => {}
ResourceUpdate::SetImageVisibleArea(..) => {}
}
}
}

View File

@ -63,7 +63,6 @@ mod cgfont_to_data;
use binary_frame_reader::BinaryFrameReader;
use gleam::gl;
use glutin::GlContext;
use glutin::dpi::{LogicalPosition, LogicalSize};
use perf::PerfHarness;
use png::save_flipped;
use rawtest::RawtestHarness;
@ -80,6 +79,7 @@ use std::rc::Rc;
use std::sync::mpsc::{channel, Sender, Receiver};
use webrender::DebugFlags;
use webrender::api::*;
use winit::dpi::{LogicalPosition, LogicalSize};
use winit::VirtualKeyCode;
use wrench::{Wrench, WrenchThing};
use yaml_frame_reader::YamlFrameReader;

View File

@ -4,7 +4,7 @@
use {WindowWrapper, NotifierEvent};
use blob;
use euclid::{TypedRect, TypedSize2D, TypedPoint2D};
use euclid::{TypedRect, TypedSize2D, TypedPoint2D, point2, size2};
use std::sync::Arc;
use std::sync::atomic::{AtomicIsize, Ordering};
use std::sync::mpsc::Receiver;
@ -47,6 +47,7 @@ impl<'a> RawtestHarness<'a> {
self.test_blob_update_epoch_test();
self.test_tile_decomposition();
self.test_very_large_blob();
self.test_insufficient_blob_visible_area();
self.test_offscreen_blob();
self.test_save_restore();
self.test_blur_cache();
@ -180,6 +181,14 @@ impl<'a> RawtestHarness<'a> {
AlphaType::PremultipliedAlpha,
blob_img,
);
txn.set_image_visible_area(
blob_img,
NormalizedRect {
origin: point2(0.0, 0.03),
size: size2(1.0, 0.03),
}
);
builder.pop_clip_id();
let mut epoch = Epoch(0);
@ -189,7 +198,7 @@ impl<'a> RawtestHarness<'a> {
let pixels = self.render_and_get_pixels(window_rect);
// make sure we didn't request too many blobs
assert_eq!(called.load(Ordering::SeqCst), 16);
assert!(called.load(Ordering::SeqCst) < 20);
// make sure things are in the right spot
assert!(
@ -230,6 +239,99 @@ impl<'a> RawtestHarness<'a> {
*self.wrench.callbacks.lock().unwrap() = blob::BlobCallbacks::new();
}
fn test_insufficient_blob_visible_area(&mut self) {
println!("\tinsufficient blob visible area.");
// This test compares two almost identical display lists containing a blob
// image. The only difference is that one of the display lists specifies a visible
// area for its blob image which is too small, causing frame building to run into
// missing tiles, and forcing it to exercise the code path where missing tiles are
// rendered synchronously on demand.
assert_eq!(self.wrench.device_pixel_ratio, 1.);
let window_size = self.window.get_inner_size();
let test_size = DeviceUintSize::new(800, 800);
let window_rect = DeviceUintRect::new(
DeviceUintPoint::new(0, window_size.height - test_size.height),
test_size,
);
let layout_size = LayoutSize::new(800.0, 800.0);
let image_size = size(800.0, 800.0);
let info = LayoutPrimitiveInfo::new(rect(0.0, 0.0, 800.0, 800.0));
let mut builder = DisplayListBuilder::new(self.wrench.root_pipeline_id, layout_size);
let mut txn = Transaction::new();
let blob_img1 = self.wrench.api.generate_image_key();
txn.add_image(
blob_img1,
ImageDescriptor::new(
image_size.width as u32,
image_size.height as u32,
ImageFormat::BGRA8,
false,
false
),
ImageData::new_blob_image(blob::serialize_blob(ColorU::new(50, 50, 150, 255))),
Some(100),
);
builder.push_image(
&info,
image_size,
image_size,
ImageRendering::Auto,
AlphaType::PremultipliedAlpha,
blob_img1,
);
self.submit_dl(&mut Epoch(0), layout_size, builder, &txn.resource_updates);
let pixels1 = self.render_and_get_pixels(window_rect);
let mut builder = DisplayListBuilder::new(self.wrench.root_pipeline_id, layout_size);
let mut txn = Transaction::new();
let blob_img2 = self.wrench.api.generate_image_key();
txn.add_image(
blob_img2,
ImageDescriptor::new(
image_size.width as u32,
image_size.height as u32,
ImageFormat::BGRA8,
false,
false
),
ImageData::new_blob_image(blob::serialize_blob(ColorU::new(50, 50, 150, 255))),
Some(100),
);
// Set a visible rectangle that is too small.
// This will force sync rasterization of the missing tiles during frame building.
txn.set_image_visible_area(blob_img2, NormalizedRect {
origin: point2(0.25, 0.25),
size: size2(0.1, 0.1),
});
builder.push_image(
&info,
image_size,
image_size,
ImageRendering::Auto,
AlphaType::PremultipliedAlpha,
blob_img2,
);
self.submit_dl(&mut Epoch(1), layout_size, builder, &txn.resource_updates);
let pixels2 = self.render_and_get_pixels(window_rect);
assert!(pixels1 == pixels2);
txn = Transaction::new();
txn.delete_image(blob_img1);
txn.delete_image(blob_img2);
self.wrench.api.update_resources(txn.resource_updates);
}
fn test_offscreen_blob(&mut self) {
println!("\toffscreen blob update.");
@ -466,12 +568,14 @@ impl<'a> RawtestHarness<'a> {
let img2_requested_inner = Arc::clone(&img2_requested);
// track the number of times that the second image has been requested
self.wrench.callbacks.lock().unwrap().request = Box::new(move |&desc| {
if desc.key == blob_img {
img1_requested_inner.fetch_add(1, Ordering::SeqCst);
}
if desc.key == blob_img2 {
img2_requested_inner.fetch_add(1, Ordering::SeqCst);
self.wrench.callbacks.lock().unwrap().request = Box::new(move |requests| {
for item in requests {
if item.request.key == blob_img {
img1_requested_inner.fetch_add(1, Ordering::SeqCst);
}
if item.request.key == blob_img2 {
img2_requested_inner.fetch_add(1, Ordering::SeqCst);
}
}
});

View File

@ -141,6 +141,7 @@ impl RonFrameWriter {
ResourceUpdate::DeleteFont(_) => {}
ResourceUpdate::AddFontInstance(_) => {}
ResourceUpdate::DeleteFontInstance(_) => {}
ResourceUpdate::SetImageVisibleArea(..) => {}
}
}
}

View File

@ -214,7 +214,7 @@ impl Wrench {
enable_clear_scissor: !no_scissor,
max_recorded_profiles: 16,
precache_shaders,
blob_image_renderer: Some(Box::new(blob::CheckerboardRenderer::new(callbacks.clone()))),
blob_image_handler: Some(Box::new(blob::CheckerboardRenderer::new(callbacks.clone()))),
disable_dual_source_blending,
chase_primitive,
..Default::default()

View File

@ -590,6 +590,7 @@ impl YamlFrameWriter {
);
}
ResourceUpdate::DeleteFontInstance(_) => {}
ResourceUpdate::SetImageVisibleArea(..) => {}
}
}
}