Formerly, components were printed from bottom to top, whereas the common
notation goes from top to bottom. For reference, check how we name formats in,
for example:
backends/graphics/openglsdl/openglsdl-graphics.cpp:190-230
backends/graphics/surfacesdl/surfacesdl-graphics.cpp:409-490
sherlock/scalpel/scalpel.cpp:207
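To illustrate the convention (a hypothetical example, not the code at the
paths above): a name like "RGBA8888" lists components from the most
significant bits down, so R takes the highest shift:

    // "RGBA8888": R in the top byte, A in the bottom byte.
    Graphics::PixelFormat rgba8888(4, 8, 8, 8, 8, 24, 16, 8, 0);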
This should improve the visual appearance of many fonts. However, it might
result in the first line of the glyph being drawn to the left of the position
specified in drawChar.
This allows clients to use the default FreeType2 render mode instead of light.
We really only use light as the default because that is what looks best with
the font currently used in our GUI (which is also the reason why light was
formerly always used in non-monochrome mode).
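For context, a minimal sketch of the underlying FreeType2 calls (the flag
names are FreeType2's own, not ScummVM wrappers):

    // "Light" hinting, previously always used in non-monochrome mode:
    FT_Load_Glyph(face, glyphIndex, FT_LOAD_TARGET_LIGHT);
    // FreeType2's default target, now available to clients:
    FT_Load_Glyph(face, glyphIndex, FT_LOAD_TARGET_NORMAL);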
These issues were identified by the STACK tool.
By default, the C++ new operator throws an exception on allocation failure
rather than returning a null pointer.
Testing the returned pointer for null is therefore redundant and _may_ be
removed by the compiler. Such code is optimization-unstable and may result
in incorrect behaviour at runtime.
However, we do not use exceptions as they are not supported by all
compilers and may be disabled.
To make this stable without removing the null check, the new operator call
can be qualified with std::nothrow to indicate that it should return null
rather than throw an exception.
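A minimal sketch of the two patterns (illustrative code, not taken from the
tree):

    // Optimization-unstable: plain new throws on failure, so the compiler
    // may assume 'buf' is never null and remove the check entirely.
    byte *buf = new byte[size];
    if (!buf)
        return false;

    // Stable alternative: nothrow new really does return null on failure,
    // so the check is meaningful (requires the <new> header).
    byte *buf2 = new (std::nothrow) byte[size];
    if (!buf2)
        return false;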
However, using (std::nothrow) was not desirable due to the Symbian
toolchain lacking a <new> header.
A global solution of redefining "new" as "new (std::nothrow)" was also not
easy, due to custom constructors in the NDS toolchain and in various common
classes.
It would also require adding explicit OOM checks to every new usage, as with
C malloc, which is untidy.
For now, removing this optimization-unstable code is the best option: it is
likely to have been optimized away anyway, and OOM will cause a system
library exception instead, even without exceptions enabled in the application
code.
The original alpha computation formula had a problem: if something was
drawn on top of a pixel that was already fully opaque, the computed alpha
would overflow and the destination alpha would be truncated to 0 (fully
transparent).
In commit 264ba4a9 this formula was replaced with another one, which did
not overflow but was not correct either.
This commit introduces a new formula where the rounding errors go in the
other direction: drawing a fully opaque pixel on top of a transparent one
results in a pixel which is almost, but not fully, opaque. However, this
is no problem in practice, since drawing fully opaque pixels can be handled
with much less code as a special case, so that special case has been added
(which also improves rendering speed).
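To illustrate (a sketch of the kind of formula described, not the exact code
in the tree):

    // Sketch: destination alpha update with the new-style rounding and the
    // fully-opaque fast path. Using >> 8 instead of / 255 makes the
    // rounding error go downward: srcA == 0xFF over dstA == 0 yields 0xFE
    // from the formula, so the special case handles it exactly instead.
    static inline uint8 blendAlpha(uint8 srcA, uint8 dstA) {
        if (srcA == 0xFF)
            return 0xFF; // fully opaque source: exact, and much less code
        return dstA + ((srcA * (255 - dstA)) >> 8); // can never overflow
    }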
The old computation had rounding issues, causing alpha to leak into the
red (usually) component. There is a much easier way to compute it that does
not lead to such problems: what should really happen is that the two top
bits of the A component are set to 1 (thus adding 75% alpha). So compute it
that way, for both speed and precision.
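A sketch under the assumption of a 32-bit ARGB8888 surface (the helper name
and mask are illustrative):

    // Set the two most significant bits of the alpha component directly
    // (0xC0000000 in ARGB8888): no multiplication, no rounding, and no
    // leakage into the neighbouring (red) component.
    static inline uint32 addAlpha75(uint32 color) {
        return color | 0xC0000000;
    }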
The two properties that control pixel packing and the size of the
surface need to be preserved for loadStream() to work correctly.
They are now entirely the client's responsibility.
This decoder needs to keep track of client parameters that control how the
pixels are packed, so the responsibility for clearing the state has been
moved to the client (via the destroy() method on ImageDecoder).
As no client uses the IFFDecoder for more than one image at a time,
this change does not require updates to the engines. The only effect
is on Parallaction (BRA-Amiga), which can now control the way pixels
are packed in mask and path bitmaps.
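A usage sketch of the new contract (the stream variable and parameter values
are hypothetical):

    Image::IFFDecoder decoder;
    decoder.setNumRelevantPlanes(1); // client-controlled packing parameter
    decoder.setPixelPacking(true);   // pack pixels the way the client needs
    decoder.loadStream(maskStream);  // parameters survive across loadStream()
    // ... use decoder.getSurface() ...
    decoder.destroy();               // clearing state is now the client's job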
The default at cursor construction has been set to "not visible".
Showing the cursor now requires an explicit call to setVisible(true), but a
basic test suggests this is OK; engines which fail to do this would have
been intermittently broken before.
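Sketched, the call engines now need after constructing a cursor:

    // Cursors start hidden by default; engines must opt in explicitly.
    cursor->setVisible(true);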