This is needed in particular to get a recent enough version of meson in
the stretch image, but should be generally beneficial.
Acked-by: Eric Engestrom <eric.engestrom@intel.com>
Pros:
* Less fragile due to not mixing packages from stretch and buster
* No longer need to use third-party LLVM packages
* The buster image now uses GCC 8 for C++ as well (previously 6 for C++,
8 for C), which allows dropping some hacks
Cons:
* The stretch image now uses only GCC 6, for C as well as C++
* Need separate jobs for testing old LLVM versions
Reviewed-by: Eric Engestrom <eric.engestrom@intel.com>
If installing new packages would require removing previously installed
ones, this flag causes apt-get to abort with an error instead,
preventing later obscure failures due to the missing packages.
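The flag in question is presumably apt-get's --no-remove option; a
minimal sketch of the kind of invocation meant here (package names are
just placeholders):

    # Abort instead of removing already-installed packages to satisfy new ones:
    apt-get install -y --no-remove placeholder-package another-placeholder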
Reviewed-by: Eric Engestrom <eric.engestrom@intel.com>
Yes, some tests fail, but we can turn those into XFAILs at meson time.
Better to keep the things that work working than not cover them at all.
Unfortunately XPASS results will not cause the build to fail until we
update CI to meson 0.51 or newer.
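As a sketch of the mechanism (the actual test names and meson.build
edits aren't shown here), a test marked should_fail in meson.build is
reported as an expected failure when the suite runs:

    # In meson.build the known-bad case would be declared roughly as
    #   test('some-known-bad-case', exe, should_fail : true)
    # and running the suite then counts its failure as expected:
    meson test -C _build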
Reviewed-by: Daniel Stone <daniels@collabora.com>
I've seen a couple of flakes on this one so far. Not sure whether it's
a real driver problem or not, but skip it to unblock things.
Signed-off-by: Rob Clark <robdclark@chromium.org>
If people fix bugs without updating the expected-fails list, then we
end up lacking coverage of those failures in the future. Also, some
day down the line another developer will end up trying to figure out
whether the bug was actually fixed or their environment is just failing
to reproduce it.
Suggested-by: Kristian H. Kristensen <hoegsberg@google.com>
Reviewed-by: Adam Jackson <ajax@redhat.com>
Acked-by: Rob Clark <robdclark@chromium.org>
Reviewed-by: Kristian H. Kristensen <hoegsberg@google.com>
This hasn't failed for me in ~5 minutes of looping over
dEQP-GLES3.functional.fbo.msaa.*
Reviewed-by: Adam Jackson <ajax@redhat.com>
Acked-by: Rob Clark <robdclark@chromium.org>
Reviewed-by: Kristian H. Kristensen <hoegsberg@google.com>
These haven't failed for me in ~10 minutes of looping over
draw.random.*.
Reviewed-by: Adam Jackson <ajax@redhat.com>
Acked-by: Rob Clark <robdclark@chromium.org>
Reviewed-by: Kristian H. Kristensen <hoegsberg@google.com>
Since freedreno's kernel and GPU reset seem to be totally solid, we
don't need to have the complexity of the LAVA setup that panfrost has.
Instead, we can register some boards as shared gitlab runners and have
the jobs run out of a docker container just like we do for llvmpipe.
Just make sure that the DRI device node is passed through to the
containers in the gitlab config ('devices = ["/dev/dri"]' under
runners.docker).
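Concretely, that amounts to something like the following on the runner
host (default config.toml path assumed; normally you'd edit the
existing [[runners]] entry rather than append):

    # Pass the DRI render node through to job containers:
    cat >> /etc/gitlab-runner/config.toml <<'EOF'
      [runners.docker]
        devices = ["/dev/dri"]
    EOF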
If a runner fails (networking dies, kernel panic, etc.) it'll take out
one build, but the rest can keep going, since gitlab-runner is what
pulls the jobs. Because the runners pull jobs, they can also live
behind firewalls instead of needing some public address reachable from
gitlab.fd.o.
For now, enable it just on db410c (A307) and cheza (A630), as those are
the boards I have plenty of. A307 is only testing GLES2 since
running all of GLES3 takes too long for the number of boards I've
brought up.
Acked-by: Rob Clark <robdclark@chromium.org>
Acked-by: Kenneth Graunke <kenneth@whitecape.org>
Sometimes you just want confirmation that dEQP really picked up the
driver build you thought it did. This is not as good as one might like,
because git isn't present in the cross-build image.
Acked-by: Rob Clark <robdclark@chromium.org>
Acked-by: Kenneth Graunke <kenneth@whitecape.org>
A handful of tests on freedreno have been close to the watchdog
timeout, and now sporadically fail since range analysis has slowed
down the compiler for them.
Acked-by: Rob Clark <robdclark@chromium.org>
Acked-by: Kenneth Graunke <kenneth@whitecape.org>
Reviewed-by: Timothy Arceri <tarceri@itsqueeze.com>
Reviewed-by: Jose Fonseca <jfonseca@vmware.com>
[ Michel Dänzer: Dropped jessie line from debian-install.sh again ]
Upgrading to a newer g++ causes older LLVM/clang packages to be
removed.
Reviewed-by: Jose Fonseca <jfonseca@vmware.com>
Reviewed-by: Adam Jackson <ajax@redhat.com>
Something seems to have changed in Debian buster causing installation
of the other foreign packages to fail without this.
Reviewed-by: Jose Fonseca <jfonseca@vmware.com>
Reviewed-by: Adam Jackson <ajax@redhat.com>
The GLES2 CTS takes about 8 minutes of total runtime (at 4-way
parallelism that's ~2 minutes in the test stage if runners are free),
while GLES3 takes
about 25. Since the GLES3 run is pretty expensive, just do a cheap
touch test of 1 out of every 10 tests in the test list on MRs, until
we can get the runtime down.
v2: Drop the full run for now until we can bring runtime down or bring
up a dedicated mesa runner.
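A rough sketch of the sampling meant here (the caselist file names and
the exact selection mechanism are assumptions):

    # Keep every 10th case from the full caselist for MR pipelines:
    awk 'NR % 10 == 0' deqp-gles3-cases.txt > deqp-gles3-mr-cases.txt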
Reviewed-by: Eric Engestrom <eric@engestrom.ch> (v1)
Reviewed-By: Gert Wollny <gert.wollny@collabora.com> (v1)
This is the start of doing CTS tests on merges to Mesa master. We use
the surfaceless platform so that we don't need to bother bringing up
weston or X11. The surface size is kept low to reduce runtime, but
this comes at the cost of many rendering tests skipping due to
too-small render targets (as we see the impact of Mesa on the shared
runner pool, we can reevaluate this and what set of CTS tests we want
to run).
We split the job up across 4 runners (each at 4 llvmpipe threads), so
that the job can load-balance across our shared runners and finish
sooner (since dEQP is very single-thread-performance bound).
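Roughly, each of the 4 parallel jobs runs something along these lines
(flags, file names and the small surface size are illustrative, not
copied from the CI scripts):

    export EGL_PLATFORM=surfaceless    # no X11/weston needed
    export LP_NUM_THREADS=4            # 4 llvmpipe rasterizer threads per job
    ./deqp-gles2 \
        --deqp-surface-width=64 --deqp-surface-height=64 \
        --deqp-caselist-file=caselist-$CI_NODE_INDEX.txt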
Reviewed-by: Eric Engestrom <eric.engestrom@intel.com>
Now that we're running the drivers we build, building with
optimization is important for keeping our runtime down. This shaves
about 4 minutes off the llvmpipe GLES2 CTS run at 64x64.
v2: At Michel's request, only switch meson-main until we enable CTS
for the other builds.
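The switch is presumably just meson's buildtype, e.g. something like
(exact value assumed):

    # Build the CI drivers with optimization instead of the plain debug default:
    meson _build -Dbuildtype=debugoptimized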
Reviewed-by: Eric Engestrom <eric.engestrom@intel.com>
If we don't set DESTDIR, then the DEFAULT_DRIVER_DIR built into the
libraries is correct and we don't need to use LIBGL_DRIVERS_PATH and
friends for CI usage. Incidentally, this moves our installed paths
from /builds/anholt/mesa/install/usr/local/lib (for example) to
/builds/anholt/mesa/install/lib for simplicity.
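A minimal sketch of the difference (assuming the CI installs into
$CI_PROJECT_DIR/install):

    # Before: default prefix (/usr/local) staged under DESTDIR, so the paths
    # baked into the libraries don't match where the files actually live:
    #   ninja -C _build install DESTDIR=$CI_PROJECT_DIR/install
    # After: point the prefix at the install dir directly, so
    # DEFAULT_DRIVER_DIR and friends are correct at runtime:
    meson _build --prefix=$CI_PROJECT_DIR/install
    ninja -C _build install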
Reviewed-by: Eric Engestrom <eric.engestrom@intel.com>
These could've been deleted a long time ago, but apparently we forgot.
Signed-off-by: Eric Engestrom <eric.engestrom@intel.com>
Reviewed-by: Michel Dänzer <michel.daenzer@amd.com>
We also need to update wayland-protocols and libXrandr (and randrproto),
as they are too old for gdk3 (which gtk3 depends on).
Signed-off-by: Eric Engestrom <eric.engestrom@intel.com>
Reviewed-by: Lionel Landwerlin <lionel.g.landwerlin@intel.com>
Currently we use the python package to manage APT repositories. At the
same time we also do that by hand, since it's a trivial echo to a file.
Stay consistent: remove the package and manage things manually.
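The manual approach is just an echo into sources.list.d, along the
lines of (repository line is only an example, not the one used here):

    echo "deb https://apt.llvm.org/stretch/ llvm-toolchain-stretch-7 main" \
        > /etc/apt/sources.list.d/llvm.list
    apt-get update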
Acked-by: Eric Engestrom <eric@engestrom.ch>
Reviewed-by: Michel Dänzer <michel.daenzer@amd.com>
Signed-off-by: Emil Velikov <emil.velikov@collabora.com>
Now that helgrind is less upset and I've completed many successful
full shader-db runs, we should be able to enable freedreno shader-db
runs for Mesa checkins on the tiny public shader-db.
Reviewed-by: Rob Clark <robdclark@gmail.com>
This provides significant compiler coverage during CI at a fairly low
cost in CPU time (~17s per thread for 4 threads on
gst-gitlab-htz-runner3).
I'm leaving wget in the docker image, as once this is in master I'm
planning on having an automatic shader-db comparison between master
and the branch included in the artifacts. I also haven't done
freedreno yet, because it has some races when run in multithreaded
mode that I'm still tracking down.
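For reference, the shader-db run itself is on the order of (invocation
sketched from memory, exact flags and paths may differ):

    # Compile the shader-db corpus with 4 threads and keep the report as an artifact:
    ./run -j 4 shaders/ > artifacts/shaderdb-results.txt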
Reviewed-by: Eric Engestrom <eric@engestrom.ch>
I introduced libdir for cross-builds so we could point at the
resulting drivers without per-arch dependencies, but I'd rather not
have to type x86_64-linux-whatever for non-cross-builds either.
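i.e. something like the following for every build, not just the cross
ones (value assumed):

    # Keep drivers under a predictable install/lib/ path instead of an
    # arch-specific lib/x86_64-linux-gnu/:
    meson _build -Dlibdir=lib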
Reviewed-by: Eric Engestrom <eric@engestrom.ch>
I don't particularly care about getting x86/ARM cross-build coverage
of all the window systems, but we do want to be building src/mesa/
(for x86 asm) and gallium drivers (for vc4 NEON asm). I'm also hoping
to use these build products for testing freedreno on actual HW (which
we do using surfaceless).
This increases the docker image from 1.4G to 1.5G.
Reviewed-by: Michel Dänzer <michel.daenzer@amd.com>
Acked-by: Eric Engestrom <eric@engestrom.ch>
Fixes the following build problem:
Message: libdrm 2.4.99 needed because amdgpu has the highest requirement
Dependency libdrm_intel found: NO found '2.4.97' but need: '>=2.4.99'
Dependency libdrm_intel found: NO
meson.build:1178:4: ERROR: Invalid version of dependency, need 'libdrm_intel' ['>=2.4.99'] found '2.4.97'.
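Presumably the fix amounts to building a new enough libdrm into the
image, roughly (URL pattern taken from the libdrm release area, exact
steps assumed):

    # Build and install libdrm >= 2.4.99 so meson's dependency check passes:
    wget https://dri.freedesktop.org/libdrm/libdrm-2.4.99.tar.bz2
    tar xf libdrm-2.4.99.tar.bz2
    cd libdrm-2.4.99
    meson _build && ninja -C _build install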
Signed-off-by: Christian Gmeiner <christian.gmeiner@gmail.com>
Reviewed-by: Eric Engestrom <eric@engestrom.ch>
Signed-off-by: Eric Engestrom <eric.engestrom@intel.com>
Reviewed-by: Lionel Landwerlin <lionel.g.landwerlin@intel.com>
[ Michel Dänzer: Take changes affecting the docker image from !299,
plus remove the unzip package again before generating the image ]
And consolidate it all into a single job.
It doesn't take much longer than building against a single version,
thanks to ccache.
Overall, this single job might be faster or at least use fewer CPU
cycles than the two jobs before, while covering thrice as many versions
of LLVM.
v2:
* Move "rm -rf _build" to meson-build.sh.
* Set GALLIUM_DRIVERS the same way both times in the meson-clover job,
for symmetry.
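The shape of the consolidated job is roughly the following (the version
list, variable name and script path are hypothetical, not the real job
definition):

    # Build against several LLVM versions in one job; ccache keeps the
    # non-LLVM parts cheap after the first pass:
    for llvm in 5.0 6.0 7; do
        LLVM_VERSION=$llvm .gitlab-ci/meson-build.sh
    done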
Reviewed-by: Eric Engestrom <eric.engestrom@intel.com> # v1
No functional change intended (except for no longer running meson
--version separately, as the version appears early in meson's output
anyway).
Reviewed-by: Eric Engestrom <eric.engestrom@intel.com>
bash subshells don't inherit the -e option by default, so failures in
the subshell commands wouldn't cause the CI job to fail.
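A minimal illustration of the pitfall and one way to address it (the
actual CI scripts may do this differently):

    set -e
    # With plain $( ), the failure of "false" inside the subshell is ignored;
    # the parent only sees the exit status of the last command (echo):
    RESULT=$(false; echo "still reached")
    # Re-enabling errexit inside the subshell makes the failure propagate and,
    # combined with the outer set -e, abort the job:
    RESULT=$(set -e; false; echo "not reached")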
Reviewed-by: Eric Engestrom <eric.engestrom@intel.com>
We either compile these locally, or they are dependencies of other
packages we install.
v2:
* Adapt to leaving self-compiled packages untouched.
Reviewed-by: Eric Engestrom <eric.engestrom@intel.com>
We now use the C frontend of GCC 8 instead of 6 (this required tweaking
the before_script for the clang job). We cannot use the C++ frontend of
GCC 7 or newer yet, because upstream GCC 7 changed some C++ name
mangling in backwards-incompatible ways, and LLVM < 6.0 packages aren't
available in buster.
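In practice the split presumably boils down to something like this in
the image or before_script (exact mechanism is an assumption):

    # C builds use GCC 8; C++ stays on GCC 6 to keep the pre-GCC-7 C++ ABI
    # that the available LLVM (< 6.0) packages were built against:
    export CC=gcc-8
    export CXX=g++-6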
Reviewed-by: Eric Engestrom <eric.engestrom@intel.com>
The APT archive used by the Ubuntu docker image can be slow, even timing
out sometimes, causing spurious failures of the containers-build job.
The Debian docker image uses deb.debian.org, which is backed by a
content distribution network.
One downside is that stretch only has GCC 6, whereas bionic had 7.
Reviewed-by: Eric Engestrom <eric.engestrom@intel.com>