Stage 1 of rename

TODO: re-enable glave build, advance API for glave

v2: get rid of outdated code in tri introduced by rebase
    rename wsi_null.c (olv)
Courtney Goeltzenleuchter 2015-04-08 15:36:08 -06:00 committed by Chia-I Wu
parent 6c5c011aae
commit 0b0186026d
27 changed files with 3218 additions and 4687 deletions

.gitignore

@ -8,18 +8,6 @@ XGLConfig.h
*.so.*
icd/common/libicd.a
icd/intel/intel_gpa.c
loader/dispatch.c
loader/table_ops.h
tests/xgl_image_tests
tests/xgl_render_tests
tests/xglbase
tests/xglinfo
layers/xgl_dispatch_table_helper.h
layers/xgl_enum_string_helper.h
layers/xgl_generic_intercept_proc_helper.h
layers/xgl_struct_string_helper.h
layers/xgl_struct_wrappers.cpp
layers/xgl_struct_wrappers.h
_out64
out32/*
out64/*
@ -36,3 +24,5 @@ libs/Win32/Debug/*
*.vcxproj
*.sdf
*.filters
build
dbuild


@ -68,7 +68,7 @@ The standard build process builds the icd, the icd loader and all the tests.
Example debug build:
```
cd YOUR_DEV_DIRECTORY # cd to the root of the xgl git repository
cd YOUR_DEV_DIRECTORY # cd to the root of the vk git repository
export KHRONOS_ACCOUNT_NAME= <subversion login name for svn checkout of BIL>
./update_external_sources.sh # fetches and builds glslang, llvm, LunarGLASS, and BIL
cmake -H. -Bdbuild -DCMAKE_BUILD_TYPE=Debug
@ -76,30 +76,30 @@ cd dbuild
make
```
To run XGL programs you must tell the icd loader where to find the libraries. Set the
environment variable LIBXGL_DRIVERS_PATH to the driver path. For example:
To run VK programs you must tell the icd loader where to find the libraries. Set the
environment variable LIBVK_DRIVERS_PATH to the driver path. For example:
```
export LIBXGL_DRIVERS_PATH=$PWD/icd/intel
export LIBVK_DRIVERS_PATH=$PWD/icd/intel
```
To enable debug and validation layers with your XGL programs you must tell the icd loader
where to find the layer libraries. Set the environment variable LIBXGL_LAYERS_PATH to
the layer folder and indicate the layers you want loaded via LIBXGL_LAYER_NAMES.
To enable debug and validation layers with your VK programs you must tell the icd loader
where to find the layer libraries. Set the environment variable LIBVK_LAYERS_PATH to
the layer folder and indicate the layers you want loaded via LIBVK_LAYER_NAMES.
For example, to enable the APIDump and DrawState layers, do:
```
export LIBXGL_LAYERS_PATH=$PWD/layers
export LIBXGL_LAYER_NAMES=APIDump:DrawState
export LIBVK_LAYERS_PATH=$PWD/layers
export LIBVK_LAYER_NAMES=APIDump:DrawState
```
## Linux Test
The test executables can be found in the dbuild/tests directory. The tests use the Google
gtest infrastructure. Tests available so far:
- xglinfo: Report GPU properties
- xglbase: Test basic entry points
- xgl_blit_tests: Test XGL Blits (copy, clear, and resolve)
- xgl_image_tests: Test XGL image related calls needed by render_test
- xgl_render_tests: Render a single triangle with XGL. Triangle will be in a .ppm in
- vkinfo: Report GPU properties
- vkbase: Test basic entry points
- vk_blit_tests: Test VK Blits (copy, clear, and resolve)
- vk_image_tests: Test VK image related calls needed by render_test
- vk_render_tests: Render a single triangle with VK. Triangle will be in a .ppm in
the current directory at the end of the test.
## Linux Demos
@ -162,23 +162,23 @@ Cygwin is used in order to obtain a local copy of the Git repository, and to run
Example debug build:
```
cd GL-Next # cd to the root of the xgl git repository
cd GL-Next # cd to the root of the vk git repository
mkdir _out64
cd _out64
cmake -G "Visual Studio 12 Win64" -DCMAKE_BUILD_TYPE=Debug ..
```
At this point, you can use Windows Explorer to launch Visual Studio by double-clicking on the "XGL.sln" file in the _out64 folder. Once Visual Studio comes up, you can select "Debug" or "Release" from a drop-down list. You can start a build with either the menu (Build->Build Solution), or a keyboard shortcut (Ctrl+Shift+B). As part of the build process, Python scripts will create additional Visual Studio files and projects, along with additional source files. All of these auto-generated files are under the "_out64" folder.
At this point, you can use Windows Explorer to launch Visual Studio by double-clicking on the "VK.sln" file in the _out64 folder. Once Visual Studio comes up, you can select "Debug" or "Release" from a drop-down list. You can start a build with either the menu (Build->Build Solution), or a keyboard shortcut (Ctrl+Shift+B). As part of the build process, Python scripts will create additional Visual Studio files and projects, along with additional source files. All of these auto-generated files are under the "_out64" folder.
XGL programs must be able to find and use the XGL.dll library. Make sure it is either installed in the C:\Windows\System32 folder, or the PATH environment variable includes the folder that it is located in.
VK programs must be able to find and use the VK.dll library. Make sure it is either installed in the C:\Windows\System32 folder, or the PATH environment variable includes the folder that it is located in.
To run XGL programs you must have an appropriate ICD (installable client driver) that is either installed in the C:\Windows\System32 folder, or pointed to by the registry and/or an environment variable:
To run VK programs you must have an appropriate ICD (installable client driver) that is either installed in the C:\Windows\System32 folder, or pointed to by the registry and/or an environment variable:
- Registry:
- Root Key: HKEY_LOCAL_MACHINE
- Key: "SOFTWARE\XGL"
- Value: "XGL_DRIVERS_PATH" (semi-colon-delimited set of folders to look for ICDs)
- Environment Variable: "XGL_DRIVERS_PATH" (semi-colon-delimited set of folders to look for ICDs)
- Key: "SOFTWARE\VK"
- Value: "VK_DRIVERS_PATH" (semi-colon-delimited set of folders to look for ICDs)
- Environment Variable: "VK_DRIVERS_PATH" (semi-colon-delimited set of folders to look for ICDs)
Note: If both the registry value and environment variable are used, they are concatenated into a new semi-colon-delimited list of folders.
@ -188,24 +188,24 @@ Note: Environment variables on Windows cannot be set with Cygwin, but must be se
- Within the search box, type "environment variable" and click on "Edit the system environment variables" (or navigate there via "System and Security->System->Advanced system settings").
- This will launch a window with several tabs, one of which is "Advanced". Click on the "Environment Variables..." button.
- For either "User variables" or "System variables" click "New...".
- Enter "XGL_DRIVERS_PATH" as the variable name, and an appropriate Windows path to where your driver DLL is (e.g. C:\Users\username\GL-Next\_out64\icd\drivername\Debug).
- Enter "VK_DRIVERS_PATH" as the variable name, and an appropriate Windows path to where your driver DLL is (e.g. C:\Users\username\GL-Next\_out64\icd\drivername\Debug).
It is possible to specify multiple icd folders. Simply use a semi-colon (i.e. ";") to separate folders in the environment variable.
The icd loader searches in all of the folders for files that are named "XGL_*.dll" (e.g. "XGL_foo.dll"). It attempts to dynamically load these files, and look for appropriate functions.
The icd loader searches in all of the folders for files that are named "VK_*.dll" (e.g. "VK_foo.dll"). It attempts to dynamically load these files, and look for appropriate functions.
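For illustration only, here is a minimal sketch of that scan-and-load step for a single folder, using the raw Win32 APIs rather than the loader's own wrappers; `scan_icd_folder` is a hypothetical name and error handling is omitted:
```
#include <windows.h>
#include <stdio.h>

/* Hypothetical sketch: enumerate "VK_*.dll" in one folder and try each candidate ICD. */
static void scan_icd_folder(const char *folder)
{
    char pattern[MAX_PATH];
    WIN32_FIND_DATAA fd;

    snprintf(pattern, sizeof(pattern), "%s\\VK_*.dll", folder);
    HANDLE find = FindFirstFileA(pattern, &fd);
    if (find == INVALID_HANDLE_VALUE)
        return;                                   /* nothing matched in this folder */
    do {
        char path[MAX_PATH];
        snprintf(path, sizeof(path), "%s\\%s", folder, fd.cFileName);
        HMODULE lib = LoadLibraryA(path);         /* attempt to dynamically load the DLL */
        if (lib == NULL)
            continue;
        /* "look for appropriate functions": keep the DLL only if it exports what we need */
        if (GetProcAddress(lib, "vkGetProcAddr") == NULL)
            FreeLibrary(lib);
    } while (FindNextFileA(find, &fd));
    FindClose(find);
}
```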
To enable debug and validation layers with your XGL programs you must tell the icd loader
To enable debug and validation layers with your VK programs you must tell the icd loader
where to find the layer libraries, and which ones you desire to use. The default folder for layers is C:\Windows\System32. Again, this can be pointed to by the registry and/or an environment variable:
- Registry:
- Root Key: HKEY_LOCAL_MACHINE
- Key: "System\XGL"
- Value: "XGL_LAYERS_PATH" (semi-colon-delimited set of folders to look for layers)
- Value: "XGL_LAYER_NAMES" (semi-colon-delimited list of layer names)
- Key: "System\VK"
- Value: "VK_LAYERS_PATH" (semi-colon-delimited set of folders to look for layers)
- Value: "VK_LAYER_NAMES" (semi-colon-delimited list of layer names)
- Environment Variables:
- "XGL_LAYERS_PATH" (semi-colon-delimited set of folders to look for layers)
- "XGL_LAYER_NAMES" (semi-colon-delimited list of layer names)
- "VK_LAYERS_PATH" (semi-colon-delimited set of folders to look for layers)
- "VK_LAYER_NAMES" (semi-colon-delimited list of layer names)
Note: If both the registry value and environment variable are used, they are concatenated into a new semi-colon-delimited list.
The icd loader searches in all of the folders for files that are named "XGLLayer*.dll" (e.g. "XGLLayerParamChecker.dll"). It attempts to dynamically load these files, and look for appropriate functions.
The icd loader searches in all of the folders for files that are named "VKLayer*.dll" (e.g. "VKLayerParamChecker.dll"). It attempts to dynamically load these files, and look for appropriate functions.


@ -1,13 +1,13 @@
# Explicit GL (XGL) Ecosystem Components
# Explicit GL (VK) Ecosystem Components
*Version 0.8, 04 Feb 2015*
This project provides *open source* tools for XGL Developers.
This project provides *open source* tools for VK Developers.
## Introduction
XGL is an Explicit API, enabling direct control over how GPUs actually work. No validation, shader recompilation, memory management or synchronization is done inside an XGL driver. Applications have full control and responsibility. Any errors in how XGL is used are likely to result in a crash. This project provides layered utility libraries to ease development and help guide developers to proven safe patterns.
VK is an Explicit API, enabling direct control over how GPUs actually work. No validation, shader recompilation, memory management or synchronization is done inside a VK driver. Applications have full control and responsibility. Any errors in how VK is used are likely to result in a crash. This project provides layered utility libraries to ease development and help guide developers to proven safe patterns.
New with XGL is an extensible layered architecture that enables significant innovation in tools:
New with VK is an extensible layered architecture that enables significant innovation in tools:
- Cross IHV support enables tools vendors to plug into a common, extensible layer architecture
- Layered tools during development enable validating, debugging and profiling without production performance impact
- Modular validation architecture encourages many fine-grained layers--and new layers can be added easily
@ -19,9 +19,9 @@ insights into the specification as we approach an alpha header, and to assists t
demos for GDC.
The following components are available:
- XGL Library and header files, which include:
- VK Library and header files, which include:
- [*ICD Loader*](loader) and [*Layer Manager*](layers/README.md)
- Snapshot of *XGL* and *BIL* header files from [*Khronos*](www.khronos.org)
- Snapshot of *VK* and *BIL* header files from [*Khronos*](www.khronos.org)
- [*GLAVE Debugger*](tools/glave)
@ -33,7 +33,7 @@ The following components are available:
## New
- Updated loader, driver, demos, tests and many tools to use "alpha" xgl.h (~ version 47).
- Updated loader, driver, demos, tests and many tools to use "alpha" vulkan.h (~ version 47).
Supports new resource binding model, memory allocation, pixel FORMATs and
other updates.
APIDump layer is working with these new API elements.
@ -44,9 +44,9 @@ The following components are available:
## Prior updates
- XGL API trace and capture tools. See tools/glave/README.md for details.
- VK API trace and capture tools. See tools/glave/README.md for details.
- Sample driver now supports multiple render targets. Added TriangleMRT to test that functionality.
- Added XGL_SLOT_SHADER_TEXTURE_RESOURCE to xgl.h as a descriptor slot type to work around confusion in GLSL
- Added VK_SLOT_SHADER_TEXTURE_RESOURCE to vulkan.h as a descriptor slot type to work around confusion in GLSL
between textures and buffers as shader resources.
- Misc. fixes for layers and Intel sample driver
- Added mutex to APIDump, APIDumpFile and DrawState to prevent apparent threading issues using printf
@ -70,24 +70,24 @@ Information on how to enable the various Debug and Validation layers is in
## References
This version of the components are written based on the following preliminary specs and proposals:
- [**XGL Programmers Reference**, 1 Jul 2014](https://cvs.khronos.org/svn/repos/oglc/trunk/nextgen/proposals/AMD/Explicit%20GL%20Programming%20Guide%20and%20API%20Reference.pdf)
- [**VK Programmers Reference**, 1 Jul 2014](https://cvs.khronos.org/svn/repos/oglc/trunk/nextgen/proposals/AMD/Explicit%20GL%20Programming%20Guide%20and%20API%20Reference.pdf)
- [**BIL**, revision 29](https://cvs.khronos.org/svn/repos/oglc/trunk/nextgen/proposals/BIL/Specification/BIL.html)
## License
This work is intended to be released as open source under a BSD-style
license once the XGL specification is public. Until that time, this work
is covered by the Khronos NDA governing the details of the XGL API.
license once the VK specification is public. Until that time, this work
is covered by the Khronos NDA governing the details of the VK API.
## Acknowledgements
While this project is being developed by LunarG, Inc., there are many other
companies and individuals making this possible: Valve Software, funding
project development; Intel Corporation, providing full hardware specifications
and valuable technical feedback; AMD, providing XGL spec editor contributions;
and valuable technical feedback; AMD, providing VK spec editor contributions;
ARM, contributing a Chairman for this working group within Khronos; Nvidia,
providing an initial co-editor for the spec; Qualcomm for picking up the
co-editor's chair; and Khronos, for providing hosting within GitHub.
## Contact
If you have questions or comments about this driver; or you would like to contribute
directly to this effort, please contact us at XGL@LunarG.com; or if you prefer, via
directly to this effort, please contact us at VK@LunarG.com; or if you prefer, via
the GL Common mailing list: gl_common@khronos.org


@ -21,7 +21,7 @@ set(CMAKE_C_FLAGS_DEBUG "${CMAKE_C_FLAGS_DEBUG} -DDEBUG")
set(CMAKE_CXX_FLAGS_DEBUG "${CMAKE_CXX_FLAGS_DEBUG} -DDEBUG")
if (WIN32)
set(CMAKE_C_FLAGS "${CMAKE_C_FLAGS} -DXGL_PROTOTYPES -D_CRT_SECURE_NO_WARNINGS -DXCB_NVIDIA")
set(CMAKE_C_FLAGS "${CMAKE_C_FLAGS} -DVK_PROTOTYPES -D_CRT_SECURE_NO_WARNINGS -DXCB_NVIDIA")
add_library(XGL SHARED loader.c loader.h dirent_on_windows.c dispatch.c table_ops.h XGL.def)
set_target_properties(XGL PROPERTIES LINK_FLAGS "/DEF:${PROJECT_SOURCE_DIR}/loader/XGL.def")
@ -30,7 +30,7 @@ if (WIN32)
target_link_libraries(XGL)
endif()
if (NOT WIN32)
set(CMAKE_C_FLAGS "${CMAKE_C_FLAGS} -DXGL_PROTOTYPES -Wpointer-arith")
set(CMAKE_C_FLAGS "${CMAKE_C_FLAGS} -DVK_PROTOTYPES -Wpointer-arith")
add_library(XGL SHARED loader.c dispatch.c table_ops.h)
set_target_properties(XGL PROPERTIES SOVERSION 0)


@ -1,8 +1,8 @@
# Loader Description
## Overview
The Loader implements the main XGL library (e.g. "XGL.dll" on Windows and
"libXGL.so" on Linux). It handles layer management and driver management. The
The Loader implements the main VK library (e.g. "VK.dll" on Windows and
"libVK.so" on Linux). It handles layer management and driver management. The
loader fully supports multi-gpu operation. As part of this, it dispatches API
calls to the correct driver, and to the correct layers, based on the GPU object
selected by the application.
@ -20,43 +20,43 @@ doesn't intercept a given entrypoint will be skipped for that entrypoint. The
loader supports layers that operate on multiple GPUs.
## Environment Variables
**LIBXGL\_DRIVERS\_PATH** directory for loader to search for ICD driver libraries to open
**LIBVK\_DRIVERS\_PATH** directory for loader to search for ICD driver libraries to open
**LIBXGL\_LAYERS\_PATH** directory for loader to search for layer libraries that may get activated and used at xglCreateDevice() time.
**LIBVK\_LAYERS\_PATH** directory for loader to search for layer libraries that may get activated and used at vkCreateDevice() time.
**LIBXGL\_LAYER\_NAMES** colon-separated list of layer names to be activated (e.g., LIBXGL\_LAYER\_NAMES=MemTracker:DrawState).
**LIBVK\_LAYER\_NAMES** colon-separated list of layer names to be activated (e.g., LIBVK\_LAYER\_NAMES=MemTracker:DrawState).
Note: Both of the LIBXGL\_*\_PATH variables may contain more than one directory. Each directory must be separated by one of the following characters, depending on your OS:
Note: Both of the LIBVK\_*\_PATH variables may contain more than one directory. Each directory must be separated by one of the following characters, depending on your OS:
- ";" on Windows
- ":" on Linux
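As a concrete illustration of how a PATH-like variable such as LIBVK\_DRIVERS\_PATH might be consumed, here is a minimal sketch that splits the value on the Linux separator and visits each directory; `scan_directory` and `scan_driver_paths` are hypothetical names, not functions in this tree:
```
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

/* Placeholder for the per-directory work (scanning for libVK_*.so, etc.). */
static void scan_directory(const char *dir)
{
    printf("would scan %s\n", dir);
}

/* Minimal sketch: split a PATH-like variable on ':' (";" would be used on Windows). */
static void scan_driver_paths(void)
{
    const char *env = getenv("LIBVK_DRIVERS_PATH");
    if (env == NULL)
        return;                 /* the real loader falls back to a built-in default path */

    char *paths = strdup(env);  /* strtok modifies its argument, so work on a copy */
    for (char *dir = strtok(paths, ":"); dir != NULL; dir = strtok(NULL, ":"))
        scan_directory(dir);
    free(paths);
}
```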
## Interface to driver (ICD)
- xglEnumerateGpus exported
- xglCreateInstance exported
- xglDestroyInstance exported
- xglGetProcAddr exported and returns valid function pointers for all the XGL API entrypoints
- all objects created by ICD can be cast to (XGL\_LAYER\_DISPATCH\_TABLE \*\*)
- vkEnumerateGpus exported
- vkCreateInstance exported
- vkDestroyInstance exported
- vkGetProcAddr exported and returns valid function pointers for all the VK API entrypoints
- all objects created by ICD can be cast to (VK\_LAYER\_DISPATCH\_TABLE \*\*)
where the loader will replace the first entry with a pointer to the dispatch table which is
owned by the loader. This implies three things for ICD drivers:
1. The ICD must return a pointer for the opaque object handle
2. This pointer points to a regular C structure with the first entry being a pointer.
Note: for any C++ ICD's that implement XGL objects directly as C++ classes.
Note: this applies to any C++ ICDs that implement VK objects directly as C++ classes.
The C++ compiler may put a vtable at offset zero, if your class is virtual.
In this case use a regular C structure (see below).
3. The reservedForLoader.loaderMagic member must be initialized with ICD\_LOADER\_MAGIC, as follows:
```
#include "xglIcd.h"
#include "vkIcd.h"
struct {
XGL_LOADER_DATA reservedForLoader; // Reserve space for pointer to loader's dispatch table
VK_LOADER_DATA reservedForLoader; // Reserve space for pointer to loader's dispatch table
myObjectClass myObj; // Your driver's C++ class
} xglObj;
} vkObj;
xglObj alloc_icd_obj()
vkObj alloc_icd_obj()
{
xglObj *newObj = alloc_obj();
vkObj *newObj = alloc_obj();
...
// Initialize pointer to loader's dispatch table with ICD_LOADER_MAGIC
set_loader_magic_value(newObj);
@ -68,5 +68,5 @@ Note: Both of the LIBXGL\_*\_PATH variables may contain more than one directory.
Additional Notes:
- The ICD may or may not implement a dispatch table.
- ICD entrypoints can be named anything including the official xgl name such as xglCreateDevice(). However, beware of interposing by dynamic OS library loaders if the official names are used. On Linux, if official names are used, the ICD library must be linked with -Bsymbolic.
- ICD entrypoints can be named anything including the official vk name such as vkCreateDevice(). However, beware of interposing by dynamic OS library loaders if the official names are used. On Linux, if official names are used, the ICD library must be linked with -Bsymbolic.
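To make the exported-entrypoint requirement above concrete, here is a hypothetical fragment of an ICD-side vkGetProcAddr; the signatures match the headers in this tree, but my_GetExtensionSupport and its behavior are illustrative only:
```
#include <string.h>
#include <vulkan.h>   /* VK_PHYSICAL_GPU, VK_RESULT, VKAPI */

/* Hypothetical ICD-internal stub; a real driver answers for the extensions it implements. */
static VK_RESULT VKAPI my_GetExtensionSupport(VK_PHYSICAL_GPU gpu, const char *pExtName)
{
    (void) gpu;
    (void) pExtName;
    return VK_ERROR_INVALID_EXTENSION;
}

/* Exported lookup: the loader resolves VK entrypoints through this by name. */
void * VKAPI vkGetProcAddr(VK_PHYSICAL_GPU gpu, const char *pName)
{
    (void) gpu;   /* a real ICD may hand back per-GPU function pointers */
    if (strcmp(pName, "vkGetExtensionSupport") == 0)
        return (void *) my_GetExtensionSupport;
    /* ... one comparison (or a table lookup) per entrypoint ... */
    return NULL;
}
```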


@ -1,5 +1,5 @@
/*
* XGL
* Vulkan
*
* Copyright (C) 2014 LunarG, Inc.
*
@ -43,7 +43,7 @@
#include "loader_platform.h"
#include "table_ops.h"
#include "loader.h"
#include "xglIcd.h"
#include "vkIcd.h"
// The following is #included again to catch certain OS-specific functions
// being used:
#include "loader_platform.h"
@ -66,12 +66,12 @@ struct layer_name_pair {
struct loader_icd {
const struct loader_scanned_icds *scanned_icds;
XGL_LAYER_DISPATCH_TABLE *loader_dispatch;
uint32_t layer_count[XGL_MAX_PHYSICAL_GPUS];
struct loader_layers layer_libs[XGL_MAX_PHYSICAL_GPUS][MAX_LAYER_LIBRARIES];
XGL_BASE_LAYER_OBJECT *wrappedGpus[XGL_MAX_PHYSICAL_GPUS];
VK_LAYER_DISPATCH_TABLE *loader_dispatch;
uint32_t layer_count[VK_MAX_PHYSICAL_GPUS];
struct loader_layers layer_libs[VK_MAX_PHYSICAL_GPUS][MAX_LAYER_LIBRARIES];
VK_BASE_LAYER_OBJECT *wrappedGpus[VK_MAX_PHYSICAL_GPUS];
uint32_t gpu_count;
XGL_BASE_LAYER_OBJECT *gpus;
VK_BASE_LAYER_OBJECT *gpus;
struct loader_icd *next;
};
@ -79,12 +79,12 @@ struct loader_icd {
struct loader_scanned_icds {
loader_platform_dl_handle handle;
xglGetProcAddrType GetProcAddr;
xglCreateInstanceType CreateInstance;
xglDestroyInstanceType DestroyInstance;
xglEnumerateGpusType EnumerateGpus;
xglGetExtensionSupportType GetExtensionSupport;
XGL_INSTANCE instance;
vkGetProcAddrType GetProcAddr;
vkCreateInstanceType CreateInstance;
vkDestroyInstanceType DestroyInstance;
vkEnumerateGpusType EnumerateGpus;
vkGetExtensionSupportType GetExtensionSupport;
VK_INSTANCE instance;
struct loader_scanned_icds *next;
};
@ -166,7 +166,7 @@ static char *loader_get_registry_and_env(const char *env_var,
size_t rtn_len;
registry_str = loader_get_registry_string(HKEY_LOCAL_MACHINE,
"Software\\XGL",
"Software\\VK",
registry_value);
registry_len = (registry_str) ? strlen(registry_str) : 0;
@ -203,7 +203,7 @@ static char *loader_get_registry_and_env(const char *env_var,
#endif // WIN32
static void loader_log(XGL_DBG_MSG_TYPE msg_type, int32_t msg_code,
static void loader_log(VK_DBG_MSG_TYPE msg_type, int32_t msg_code,
const char *format, ...)
{
char msg[256];
@ -269,14 +269,14 @@ static void loader_scanned_icd_add(const char *filename)
// Used to call: dlopen(filename, RTLD_LAZY);
handle = loader_platform_open_library(filename);
if (!handle) {
loader_log(XGL_DBG_MSG_WARNING, 0, loader_platform_open_library_error(filename));
loader_log(VK_DBG_MSG_WARNING, 0, loader_platform_open_library_error(filename));
return;
}
#define LOOKUP(func_ptr, func) do { \
func_ptr = (xgl ##func## Type) loader_platform_get_proc_address(handle, "xgl" #func); \
func_ptr = (vk ##func## Type) loader_platform_get_proc_address(handle, "vk" #func); \
if (!func_ptr) { \
loader_log(XGL_DBG_MSG_WARNING, 0, loader_platform_get_proc_address_error("xgl" #func)); \
loader_log(VK_DBG_MSG_WARNING, 0, loader_platform_get_proc_address_error("vk" #func)); \
return; \
} \
} while (0)
@ -290,7 +290,7 @@ static void loader_scanned_icd_add(const char *filename)
new_node = (struct loader_scanned_icds *) malloc(sizeof(struct loader_scanned_icds));
if (!new_node) {
loader_log(XGL_DBG_MSG_WARNING, 0, "Out of memory can't add icd");
loader_log(VK_DBG_MSG_WARNING, 0, "Out of memory can't add icd");
return;
}
@ -306,11 +306,11 @@ static void loader_scanned_icd_add(const char *filename)
/**
* Try to \c loader_icd_scan XGL driver(s).
* Try to \c loader_icd_scan VK driver(s).
*
* This function scans the default system path or path
* specified by the \c LIBXGL_DRIVERS_PATH environment variable in
* order to find loadable XGL ICDs with the name of libXGL_*.
* specified by the \c LIBVK_DRIVERS_PATH environment variable in
* order to find loadable VK ICDs with the name of libVK_*.
*
* \returns
* void; but side effect is to set loader_icd_scanned to true
@ -332,7 +332,7 @@ static void loader_icd_scan(void)
must_free_libPaths = true;
} else {
must_free_libPaths = false;
libPaths = DEFAULT_XGL_DRIVERS_PATH;
libPaths = DEFAULT_VK_DRIVERS_PATH;
}
#else // WIN32
if (geteuid() == getuid()) {
@ -340,7 +340,7 @@ static void loader_icd_scan(void)
libPaths = getenv(DRIVER_PATH_ENV);
}
if (libPaths == NULL) {
libPaths = DEFAULT_XGL_DRIVERS_PATH;
libPaths = DEFAULT_VK_DRIVERS_PATH;
}
#endif // WIN32
@ -363,18 +363,18 @@ static void loader_icd_scan(void)
if (sysdir) {
dent = readdir(sysdir);
while (dent) {
/* Look for ICDs starting with XGL_DRIVER_LIBRARY_PREFIX and
* ending with XGL_LIBRARY_SUFFIX
/* Look for ICDs starting with VK_DRIVER_LIBRARY_PREFIX and
* ending with VK_LIBRARY_SUFFIX
*/
if (!strncmp(dent->d_name,
XGL_DRIVER_LIBRARY_PREFIX,
XGL_DRIVER_LIBRARY_PREFIX_LEN)) {
VK_DRIVER_LIBRARY_PREFIX,
VK_DRIVER_LIBRARY_PREFIX_LEN)) {
uint32_t nlen = (uint32_t) strlen(dent->d_name);
const char *suf = dent->d_name + nlen - XGL_LIBRARY_SUFFIX_LEN;
if ((nlen > XGL_LIBRARY_SUFFIX_LEN) &&
const char *suf = dent->d_name + nlen - VK_LIBRARY_SUFFIX_LEN;
if ((nlen > VK_LIBRARY_SUFFIX_LEN) &&
!strncmp(suf,
XGL_LIBRARY_SUFFIX,
XGL_LIBRARY_SUFFIX_LEN)) {
VK_LIBRARY_SUFFIX,
VK_LIBRARY_SUFFIX_LEN)) {
snprintf(icd_library, 1024, "%s" DIRECTORY_SYMBOL "%s", p,dent->d_name);
loader_scanned_icd_add(icd_library);
}
@ -415,7 +415,7 @@ static void layer_lib_scan(void)
must_free_libPaths = true;
} else {
must_free_libPaths = false;
libPaths = DEFAULT_XGL_LAYERS_PATH;
libPaths = DEFAULT_VK_LAYERS_PATH;
}
#else // WIN32
if (geteuid() == getuid()) {
@ -423,7 +423,7 @@ static void layer_lib_scan(void)
libPaths = getenv(LAYERS_PATH_ENV);
}
if (libPaths == NULL) {
libPaths = DEFAULT_XGL_LAYERS_PATH;
libPaths = DEFAULT_VK_LAYERS_PATH;
}
#endif // WIN32
@ -473,18 +473,18 @@ static void layer_lib_scan(void)
if (curdir) {
dent = readdir(curdir);
while (dent) {
/* Look for layers starting with XGL_LAYER_LIBRARY_PREFIX and
* ending with XGL_LIBRARY_SUFFIX
/* Look for layers starting with VK_LAYER_LIBRARY_PREFIX and
* ending with VK_LIBRARY_SUFFIX
*/
if (!strncmp(dent->d_name,
XGL_LAYER_LIBRARY_PREFIX,
XGL_LAYER_LIBRARY_PREFIX_LEN)) {
VK_LAYER_LIBRARY_PREFIX,
VK_LAYER_LIBRARY_PREFIX_LEN)) {
uint32_t nlen = (uint32_t) strlen(dent->d_name);
const char *suf = dent->d_name + nlen - XGL_LIBRARY_SUFFIX_LEN;
if ((nlen > XGL_LIBRARY_SUFFIX_LEN) &&
const char *suf = dent->d_name + nlen - VK_LIBRARY_SUFFIX_LEN;
if ((nlen > VK_LIBRARY_SUFFIX_LEN) &&
!strncmp(suf,
XGL_LIBRARY_SUFFIX,
XGL_LIBRARY_SUFFIX_LEN)) {
VK_LIBRARY_SUFFIX,
VK_LIBRARY_SUFFIX_LEN)) {
loader_platform_dl_handle handle;
snprintf(temp_str, sizeof(temp_str), "%s" DIRECTORY_SYMBOL "%s",p,dent->d_name);
// Used to call: dlopen(temp_str, RTLD_LAZY)
@ -493,11 +493,11 @@ static void layer_lib_scan(void)
continue;
}
if (loader.scanned_layer_count == MAX_LAYER_LIBRARIES) {
loader_log(XGL_DBG_MSG_ERROR, 0, "%s ignored: max layer libraries exceed", temp_str);
loader_log(VK_DBG_MSG_ERROR, 0, "%s ignored: max layer libraries exceed", temp_str);
break;
}
if ((loader.scanned_layer_names[loader.scanned_layer_count] = malloc(strlen(temp_str) + 1)) == NULL) {
loader_log(XGL_DBG_MSG_ERROR, 0, "%s ignored: out of memory", temp_str);
loader_log(VK_DBG_MSG_ERROR, 0, "%s ignored: out of memory", temp_str);
break;
}
strcpy(loader.scanned_layer_names[loader.scanned_layer_count], temp_str);
@ -515,15 +515,15 @@ static void layer_lib_scan(void)
loader.layer_scanned = true;
}
static void loader_init_dispatch_table(XGL_LAYER_DISPATCH_TABLE *tab, xglGetProcAddrType fpGPA, XGL_PHYSICAL_GPU gpu)
static void loader_init_dispatch_table(VK_LAYER_DISPATCH_TABLE *tab, vkGetProcAddrType fpGPA, VK_PHYSICAL_GPU gpu)
{
loader_initialize_dispatch_table(tab, fpGPA, gpu);
if (tab->EnumerateLayers == NULL)
tab->EnumerateLayers = xglEnumerateLayers;
tab->EnumerateLayers = vkEnumerateLayers;
}
static struct loader_icd * loader_get_icd(const XGL_BASE_LAYER_OBJECT *gpu, uint32_t *gpu_index)
static struct loader_icd * loader_get_icd(const VK_BASE_LAYER_OBJECT *gpu, uint32_t *gpu_index)
{
for (struct loader_instance *inst = loader.instances; inst; inst = inst->next) {
for (struct loader_icd *icd = inst->icds; icd; icd = icd->next) {
@ -567,10 +567,10 @@ static void loader_init_layer_libs(struct loader_icd *icd, uint32_t gpu_index, s
obj->name[sizeof(obj->name) - 1] = '\0';
// Used to call: dlopen(pLayerNames[i].lib_name, RTLD_LAZY | RTLD_DEEPBIND)
if ((obj->lib_handle = loader_platform_open_library(pLayerNames[i].lib_name)) == NULL) {
loader_log(XGL_DBG_MSG_ERROR, 0, loader_platform_open_library_error(pLayerNames[i].lib_name));
loader_log(VK_DBG_MSG_ERROR, 0, loader_platform_open_library_error(pLayerNames[i].lib_name));
continue;
} else {
loader_log(XGL_DBG_MSG_UNKNOWN, 0, "Inserting layer %s from library %s", pLayerNames[i].layer_name, pLayerNames[i].lib_name);
loader_log(VK_DBG_MSG_UNKNOWN, 0, "Inserting layer %s from library %s", pLayerNames[i].layer_name, pLayerNames[i].lib_name);
}
free(pLayerNames[i].layer_name);
icd->layer_count[gpu_index]++;
@ -578,30 +578,30 @@ static void loader_init_layer_libs(struct loader_icd *icd, uint32_t gpu_index, s
}
}
static XGL_RESULT find_layer_extension(struct loader_icd *icd, uint32_t gpu_index, const char *pExtName, const char **lib_name)
static VK_RESULT find_layer_extension(struct loader_icd *icd, uint32_t gpu_index, const char *pExtName, const char **lib_name)
{
XGL_RESULT err;
VK_RESULT err;
char *search_name;
loader_platform_dl_handle handle;
xglGetExtensionSupportType fpGetExtensionSupport;
vkGetExtensionSupportType fpGetExtensionSupport;
/*
* The loader provides the abstraction that make layers and extensions work via
* the currently defined extension mechanism. That is, when app queries for an extension
* via xglGetExtensionSupport, the loader will call both the driver as well as any layers
* via vkGetExtensionSupport, the loader will call both the driver as well as any layers
* to see who implements that extension. Then, if the app enables the extension during
* xglCreateDevice the loader will find and load any layers that implement that extension.
* vkCreateDevice the loader will find and load any layers that implement that extension.
*/
// TODO: What if extension is in multiple places?
// TODO: Who should we ask first? Driver or layers? Do driver for now.
err = icd->scanned_icds[gpu_index].GetExtensionSupport((XGL_PHYSICAL_GPU) (icd->gpus[gpu_index].nextObject), pExtName);
if (err == XGL_SUCCESS) {
err = icd->scanned_icds[gpu_index].GetExtensionSupport((VK_PHYSICAL_GPU) (icd->gpus[gpu_index].nextObject), pExtName);
if (err == VK_SUCCESS) {
if (lib_name) {
*lib_name = NULL;
}
return XGL_SUCCESS;
return VK_SUCCESS;
}
for (unsigned int j = 0; j < loader.scanned_layer_count; j++) {
@ -610,19 +610,19 @@ static XGL_RESULT find_layer_extension(struct loader_icd *icd, uint32_t gpu_inde
if ((handle = loader_platform_open_library(search_name)) == NULL)
continue;
fpGetExtensionSupport = loader_platform_get_proc_address(handle, "xglGetExtensionSupport");
fpGetExtensionSupport = loader_platform_get_proc_address(handle, "vkGetExtensionSupport");
if (fpGetExtensionSupport != NULL) {
// Found layer's GetExtensionSupport call
err = fpGetExtensionSupport((XGL_PHYSICAL_GPU) (icd->gpus + gpu_index), pExtName);
err = fpGetExtensionSupport((VK_PHYSICAL_GPU) (icd->gpus + gpu_index), pExtName);
loader_platform_close_library(handle);
if (err == XGL_SUCCESS) {
if (err == VK_SUCCESS) {
if (lib_name) {
*lib_name = loader.scanned_layer_names[j];
}
return XGL_SUCCESS;
return VK_SUCCESS;
}
} else {
loader_platform_close_library(handle);
@ -630,12 +630,12 @@ static XGL_RESULT find_layer_extension(struct loader_icd *icd, uint32_t gpu_inde
// No GetExtensionSupport or GetExtensionSupport returned invalid extension
// for the layer, so test the layer name as if it is an extension name
// use default layer name based on library name XGL_LAYER_LIBRARY_PREFIX<name>.XGL_LIBRARY_SUFFIX
// use default layer name based on library name VK_LAYER_LIBRARY_PREFIX<name>.VK_LIBRARY_SUFFIX
char *pEnd;
size_t siz;
search_name = basename(search_name);
search_name += strlen(XGL_LAYER_LIBRARY_PREFIX);
search_name += strlen(VK_LAYER_LIBRARY_PREFIX);
pEnd = strrchr(search_name, '.');
siz = (int) (pEnd - search_name);
if (siz != strlen(pExtName))
@ -645,10 +645,10 @@ static XGL_RESULT find_layer_extension(struct loader_icd *icd, uint32_t gpu_inde
if (lib_name) {
*lib_name = loader.scanned_layer_names[j];
}
return XGL_SUCCESS;
return VK_SUCCESS;
}
}
return XGL_ERROR_INVALID_EXTENSION;
return VK_ERROR_INVALID_EXTENSION;
}
static uint32_t loader_get_layer_env(struct loader_icd *icd, uint32_t gpu_index, struct layer_name_pair *pLayerNames)
@ -691,7 +691,7 @@ static uint32_t loader_get_layer_env(struct loader_icd *icd, uint32_t gpu_index,
next++;
}
name = basename(p);
if (find_layer_extension(icd, gpu_index, name, &lib_name) != XGL_SUCCESS) {
if (find_layer_extension(icd, gpu_index, name, &lib_name) != VK_SUCCESS) {
p = next;
continue;
}
@ -714,7 +714,7 @@ static uint32_t loader_get_layer_env(struct loader_icd *icd, uint32_t gpu_index,
return count;
}
static uint32_t loader_get_layer_libs(struct loader_icd *icd, uint32_t gpu_index, const XGL_DEVICE_CREATE_INFO* pCreateInfo, struct layer_name_pair **ppLayerNames)
static uint32_t loader_get_layer_libs(struct loader_icd *icd, uint32_t gpu_index, const VK_DEVICE_CREATE_INFO* pCreateInfo, struct layer_name_pair **ppLayerNames)
{
static struct layer_name_pair layerNames[MAX_LAYER_LIBRARIES];
const char *lib_name = NULL;
@ -727,7 +727,7 @@ static uint32_t loader_get_layer_libs(struct loader_icd *icd, uint32_t gpu_index
for (uint32_t i = 0; i < pCreateInfo->extensionCount; i++) {
const char *pExtName = pCreateInfo->ppEnabledExtensionNames[i];
if (find_layer_extension(icd, gpu_index, pExtName, &lib_name) == XGL_SUCCESS) {
if (find_layer_extension(icd, gpu_index, pExtName, &lib_name) == VK_SUCCESS) {
uint32_t len;
/*
@ -788,31 +788,31 @@ static void loader_deactivate_layer(const struct loader_instance *instance)
}
}
extern uint32_t loader_activate_layers(XGL_PHYSICAL_GPU gpu, const XGL_DEVICE_CREATE_INFO* pCreateInfo)
extern uint32_t loader_activate_layers(VK_PHYSICAL_GPU gpu, const VK_DEVICE_CREATE_INFO* pCreateInfo)
{
uint32_t gpu_index;
uint32_t count;
struct layer_name_pair *pLayerNames;
struct loader_icd *icd = loader_get_icd((const XGL_BASE_LAYER_OBJECT *) gpu, &gpu_index);
struct loader_icd *icd = loader_get_icd((const VK_BASE_LAYER_OBJECT *) gpu, &gpu_index);
if (!icd)
return 0;
assert(gpu_index < XGL_MAX_PHYSICAL_GPUS);
assert(gpu_index < VK_MAX_PHYSICAL_GPUS);
/* activate any layer libraries */
if (!loader_layers_activated(icd, gpu_index)) {
XGL_BASE_LAYER_OBJECT *gpuObj = (XGL_BASE_LAYER_OBJECT *) gpu;
XGL_BASE_LAYER_OBJECT *nextGpuObj, *baseObj = gpuObj->baseObject;
xglGetProcAddrType nextGPA = xglGetProcAddr;
VK_BASE_LAYER_OBJECT *gpuObj = (VK_BASE_LAYER_OBJECT *) gpu;
VK_BASE_LAYER_OBJECT *nextGpuObj, *baseObj = gpuObj->baseObject;
vkGetProcAddrType nextGPA = vkGetProcAddr;
count = loader_get_layer_libs(icd, gpu_index, pCreateInfo, &pLayerNames);
if (!count)
return 0;
loader_init_layer_libs(icd, gpu_index, pLayerNames, count);
icd->wrappedGpus[gpu_index] = malloc(sizeof(XGL_BASE_LAYER_OBJECT) * icd->layer_count[gpu_index]);
icd->wrappedGpus[gpu_index] = malloc(sizeof(VK_BASE_LAYER_OBJECT) * icd->layer_count[gpu_index]);
if (! icd->wrappedGpus[gpu_index])
loader_log(XGL_DBG_MSG_ERROR, 0, "Failed to malloc Gpu objects for layer");
loader_log(VK_DBG_MSG_ERROR, 0, "Failed to malloc Gpu objects for layer");
for (int32_t i = icd->layer_count[gpu_index] - 1; i >= 0; i--) {
nextGpuObj = (icd->wrappedGpus[gpu_index] + i);
nextGpuObj->pGPA = nextGPA;
@ -822,18 +822,18 @@ extern uint32_t loader_activate_layers(XGL_PHYSICAL_GPU gpu, const XGL_DEVICE_CR
char funcStr[256];
snprintf(funcStr, 256, "%sGetProcAddr",icd->layer_libs[gpu_index][i].name);
if ((nextGPA = (xglGetProcAddrType) loader_platform_get_proc_address(icd->layer_libs[gpu_index][i].lib_handle, funcStr)) == NULL)
nextGPA = (xglGetProcAddrType) loader_platform_get_proc_address(icd->layer_libs[gpu_index][i].lib_handle, "xglGetProcAddr");
if ((nextGPA = (vkGetProcAddrType) loader_platform_get_proc_address(icd->layer_libs[gpu_index][i].lib_handle, funcStr)) == NULL)
nextGPA = (vkGetProcAddrType) loader_platform_get_proc_address(icd->layer_libs[gpu_index][i].lib_handle, "vkGetProcAddr");
if (!nextGPA) {
loader_log(XGL_DBG_MSG_ERROR, 0, "Failed to find xglGetProcAddr in layer %s", icd->layer_libs[gpu_index][i].name);
loader_log(VK_DBG_MSG_ERROR, 0, "Failed to find vkGetProcAddr in layer %s", icd->layer_libs[gpu_index][i].name);
continue;
}
if (i == 0) {
loader_init_dispatch_table(icd->loader_dispatch + gpu_index, nextGPA, gpuObj);
//Insert the new wrapped objects into the list with loader object at head
((XGL_BASE_LAYER_OBJECT *) gpu)->nextObject = gpuObj;
((XGL_BASE_LAYER_OBJECT *) gpu)->pGPA = nextGPA;
((VK_BASE_LAYER_OBJECT *) gpu)->nextObject = gpuObj;
((VK_BASE_LAYER_OBJECT *) gpu)->pGPA = nextGPA;
gpuObj = icd->wrappedGpus[gpu_index] + icd->layer_count[gpu_index] - 1;
gpuObj->nextObject = baseObj;
gpuObj->pGPA = icd->scanned_icds->GetProcAddr;
@ -846,27 +846,27 @@ extern uint32_t loader_activate_layers(XGL_PHYSICAL_GPU gpu, const XGL_DEVICE_CR
count = loader_get_layer_libs(icd, gpu_index, pCreateInfo, &pLayerNames);
for (uint32_t i = 0; i < count; i++) {
if (strcmp(icd->layer_libs[gpu_index][i].name, pLayerNames[i].layer_name)) {
loader_log(XGL_DBG_MSG_ERROR, 0, "Layers activated != Layers requested");
loader_log(VK_DBG_MSG_ERROR, 0, "Layers activated != Layers requested");
break;
}
}
if (count != icd->layer_count[gpu_index]) {
loader_log(XGL_DBG_MSG_ERROR, 0, "Number of Layers activated != number requested");
loader_log(VK_DBG_MSG_ERROR, 0, "Number of Layers activated != number requested");
}
}
return icd->layer_count[gpu_index];
}
LOADER_EXPORT XGL_RESULT XGLAPI xglCreateInstance(
const XGL_INSTANCE_CREATE_INFO* pCreateInfo,
XGL_INSTANCE* pInstance)
LOADER_EXPORT VK_RESULT VKAPI vkCreateInstance(
const VK_INSTANCE_CREATE_INFO* pCreateInfo,
VK_INSTANCE* pInstance)
{
static LOADER_PLATFORM_THREAD_ONCE_DECLARATION(once_icd);
static LOADER_PLATFORM_THREAD_ONCE_DECLARATION(once_layer);
struct loader_instance *ptr_instance = NULL;
struct loader_scanned_icds *scanned_icds;
struct loader_icd *icd;
XGL_RESULT res = XGL_ERROR_INITIALIZATION_FAILED;
VK_RESULT res = VK_ERROR_INITIALIZATION_FAILED;
/* Scan/discover all ICD libraries in a single-threaded manner */
loader_platform_thread_once(&once_icd, loader_icd_scan);
@ -876,7 +876,7 @@ LOADER_EXPORT XGL_RESULT XGLAPI xglCreateInstance(
ptr_instance = (struct loader_instance*) malloc(sizeof(struct loader_instance));
if (ptr_instance == NULL) {
return XGL_ERROR_OUT_OF_MEMORY;
return VK_ERROR_OUT_OF_MEMORY;
}
memset(ptr_instance, 0, sizeof(struct loader_instance));
@ -889,12 +889,12 @@ LOADER_EXPORT XGL_RESULT XGLAPI xglCreateInstance(
if (icd) {
res = scanned_icds->CreateInstance(pCreateInfo,
&(scanned_icds->instance));
if (res != XGL_SUCCESS)
if (res != VK_SUCCESS)
{
ptr_instance->icds = ptr_instance->icds->next;
loader_icd_destroy(icd);
scanned_icds->instance = NULL;
loader_log(XGL_DBG_MSG_WARNING, 0,
loader_log(VK_DBG_MSG_WARNING, 0,
"ICD ignored: failed to CreateInstance on device");
}
}
@ -902,19 +902,19 @@ LOADER_EXPORT XGL_RESULT XGLAPI xglCreateInstance(
}
if (ptr_instance->icds == NULL) {
return XGL_ERROR_INCOMPATIBLE_DRIVER;
return VK_ERROR_INCOMPATIBLE_DRIVER;
}
*pInstance = (XGL_INSTANCE) ptr_instance;
return XGL_SUCCESS;
*pInstance = (VK_INSTANCE) ptr_instance;
return VK_SUCCESS;
}
LOADER_EXPORT XGL_RESULT XGLAPI xglDestroyInstance(
XGL_INSTANCE instance)
LOADER_EXPORT VK_RESULT VKAPI vkDestroyInstance(
VK_INSTANCE instance)
{
struct loader_instance *ptr_instance = (struct loader_instance *) instance;
struct loader_scanned_icds *scanned_icds;
XGL_RESULT res;
VK_RESULT res;
// Remove this instance from the list of instances:
struct loader_instance *prev = NULL;
@ -933,7 +933,7 @@ LOADER_EXPORT XGL_RESULT XGLAPI xglDestroyInstance(
}
if (next == NULL) {
// This must be an invalid instance handle or empty list
return XGL_ERROR_INVALID_HANDLE;
return VK_ERROR_INVALID_HANDLE;
}
// cleanup any prior layer initializations
@ -943,8 +943,8 @@ LOADER_EXPORT XGL_RESULT XGLAPI xglDestroyInstance(
while (scanned_icds) {
if (scanned_icds->instance)
res = scanned_icds->DestroyInstance(scanned_icds->instance);
if (res != XGL_SUCCESS)
loader_log(XGL_DBG_MSG_WARNING, 0,
if (res != VK_SUCCESS)
loader_log(VK_DBG_MSG_WARNING, 0,
"ICD ignored: failed to DestroyInstance on device");
scanned_icds->instance = NULL;
scanned_icds = scanned_icds->next;
@ -952,43 +952,43 @@ LOADER_EXPORT XGL_RESULT XGLAPI xglDestroyInstance(
free(ptr_instance);
return XGL_SUCCESS;
return VK_SUCCESS;
}
LOADER_EXPORT XGL_RESULT XGLAPI xglEnumerateGpus(
LOADER_EXPORT VK_RESULT VKAPI vkEnumerateGpus(
XGL_INSTANCE instance,
VK_INSTANCE instance,
uint32_t maxGpus,
uint32_t* pGpuCount,
XGL_PHYSICAL_GPU* pGpus)
VK_PHYSICAL_GPU* pGpus)
{
struct loader_instance *ptr_instance = (struct loader_instance *) instance;
struct loader_icd *icd;
uint32_t count = 0;
XGL_RESULT res;
VK_RESULT res;
//in spirit of XGL don't error check on the instance parameter
//in spirit of VK don't error check on the instance parameter
icd = ptr_instance->icds;
while (icd) {
XGL_PHYSICAL_GPU gpus[XGL_MAX_PHYSICAL_GPUS];
XGL_BASE_LAYER_OBJECT * wrapped_gpus;
xglGetProcAddrType get_proc_addr = icd->scanned_icds->GetProcAddr;
VK_PHYSICAL_GPU gpus[VK_MAX_PHYSICAL_GPUS];
VK_BASE_LAYER_OBJECT * wrapped_gpus;
vkGetProcAddrType get_proc_addr = icd->scanned_icds->GetProcAddr;
uint32_t n, max = maxGpus - count;
if (max > XGL_MAX_PHYSICAL_GPUS) {
max = XGL_MAX_PHYSICAL_GPUS;
if (max > VK_MAX_PHYSICAL_GPUS) {
max = VK_MAX_PHYSICAL_GPUS;
}
res = icd->scanned_icds->EnumerateGpus(icd->scanned_icds->instance,
max, &n,
gpus);
if (res == XGL_SUCCESS && n) {
wrapped_gpus = (XGL_BASE_LAYER_OBJECT*) malloc(n *
sizeof(XGL_BASE_LAYER_OBJECT));
if (res == VK_SUCCESS && n) {
wrapped_gpus = (VK_BASE_LAYER_OBJECT*) malloc(n *
sizeof(VK_BASE_LAYER_OBJECT));
icd->gpus = wrapped_gpus;
icd->gpu_count = n;
icd->loader_dispatch = (XGL_LAYER_DISPATCH_TABLE *) malloc(n *
sizeof(XGL_LAYER_DISPATCH_TABLE));
icd->loader_dispatch = (VK_LAYER_DISPATCH_TABLE *) malloc(n *
sizeof(VK_LAYER_DISPATCH_TABLE));
for (unsigned int i = 0; i < n; i++) {
(wrapped_gpus + i)->baseObject = gpus[i];
(wrapped_gpus + i)->pGPA = get_proc_addr;
@ -999,13 +999,13 @@ LOADER_EXPORT XGL_RESULT XGLAPI xglEnumerateGpus(
/* Verify ICD compatibility */
if (!valid_loader_magic_value(gpus[i])) {
loader_log(XGL_DBG_MSG_WARNING, 0,
loader_log(VK_DBG_MSG_WARNING, 0,
"Loader: Incompatible ICD, first dword must be initialized to ICD_LOADER_MAGIC. See loader/README.md for details.\n");
assert(0);
}
const XGL_LAYER_DISPATCH_TABLE **disp;
disp = (const XGL_LAYER_DISPATCH_TABLE **) gpus[i];
const VK_LAYER_DISPATCH_TABLE **disp;
disp = (const VK_LAYER_DISPATCH_TABLE **) gpus[i];
*disp = icd->loader_dispatch + i;
}
@ -1021,16 +1021,16 @@ LOADER_EXPORT XGL_RESULT XGLAPI xglEnumerateGpus(
*pGpuCount = count;
return (count > 0) ? XGL_SUCCESS : res;
return (count > 0) ? VK_SUCCESS : res;
}
LOADER_EXPORT void * XGLAPI xglGetProcAddr(XGL_PHYSICAL_GPU gpu, const char * pName)
LOADER_EXPORT void * VKAPI vkGetProcAddr(VK_PHYSICAL_GPU gpu, const char * pName)
{
if (gpu == NULL) {
return NULL;
}
XGL_BASE_LAYER_OBJECT* gpuw = (XGL_BASE_LAYER_OBJECT *) gpu;
XGL_LAYER_DISPATCH_TABLE * disp_table = * (XGL_LAYER_DISPATCH_TABLE **) gpuw->baseObject;
VK_BASE_LAYER_OBJECT* gpuw = (VK_BASE_LAYER_OBJECT *) gpu;
VK_LAYER_DISPATCH_TABLE * disp_table = * (VK_LAYER_DISPATCH_TABLE **) gpuw->baseObject;
void *addr;
if (disp_table == NULL)
@ -1046,33 +1046,33 @@ LOADER_EXPORT void * XGLAPI xglGetProcAddr(XGL_PHYSICAL_GPU gpu, const char * pN
}
}
LOADER_EXPORT XGL_RESULT XGLAPI xglGetExtensionSupport(XGL_PHYSICAL_GPU gpu, const char *pExtName)
LOADER_EXPORT VK_RESULT VKAPI vkGetExtensionSupport(VK_PHYSICAL_GPU gpu, const char *pExtName)
{
uint32_t gpu_index;
struct loader_icd *icd = loader_get_icd((const XGL_BASE_LAYER_OBJECT *) gpu, &gpu_index);
struct loader_icd *icd = loader_get_icd((const VK_BASE_LAYER_OBJECT *) gpu, &gpu_index);
if (!icd)
return XGL_ERROR_UNAVAILABLE;
return VK_ERROR_UNAVAILABLE;
return find_layer_extension(icd, gpu_index, pExtName, NULL);
}
LOADER_EXPORT XGL_RESULT XGLAPI xglEnumerateLayers(XGL_PHYSICAL_GPU gpu, size_t maxLayerCount, size_t maxStringSize, size_t* pOutLayerCount, char* const* pOutLayers, void* pReserved)
LOADER_EXPORT VK_RESULT VKAPI vkEnumerateLayers(VK_PHYSICAL_GPU gpu, size_t maxLayerCount, size_t maxStringSize, size_t* pOutLayerCount, char* const* pOutLayers, void* pReserved)
{
uint32_t gpu_index;
size_t count = 0;
char *lib_name;
struct loader_icd *icd = loader_get_icd((const XGL_BASE_LAYER_OBJECT *) gpu, &gpu_index);
struct loader_icd *icd = loader_get_icd((const VK_BASE_LAYER_OBJECT *) gpu, &gpu_index);
loader_platform_dl_handle handle;
xglEnumerateLayersType fpEnumerateLayers;
vkEnumerateLayersType fpEnumerateLayers;
char layer_buf[16][256];
char * layers[16];
if (pOutLayerCount == NULL || pOutLayers == NULL)
return XGL_ERROR_INVALID_POINTER;
return VK_ERROR_INVALID_POINTER;
if (!icd)
return XGL_ERROR_UNAVAILABLE;
return VK_ERROR_UNAVAILABLE;
for (int i = 0; i < 16; i++)
layers[i] = &layer_buf[i][0];
@ -1082,14 +1082,14 @@ LOADER_EXPORT XGL_RESULT XGLAPI xglEnumerateLayers(XGL_PHYSICAL_GPU gpu, size_t
// Used to call: dlopen(*lib_name, RTLD_LAZY)
if ((handle = loader_platform_open_library(lib_name)) == NULL)
continue;
if ((fpEnumerateLayers = loader_platform_get_proc_address(handle, "xglEnumerateLayers")) == NULL) {
//use default layer name based on library name XGL_LAYER_LIBRARY_PREFIX<name>.XGL_LIBRARY_SUFFIX
if ((fpEnumerateLayers = loader_platform_get_proc_address(handle, "vkEnumerateLayers")) == NULL) {
//use default layer name based on library name VK_LAYER_LIBRARY_PREFIX<name>.VK_LIBRARY_SUFFIX
char *pEnd, *cpyStr;
size_t siz;
loader_platform_close_library(handle);
lib_name = basename(lib_name);
pEnd = strrchr(lib_name, '.');
siz = (int) (pEnd - lib_name - strlen(XGL_LAYER_LIBRARY_PREFIX) + 1);
siz = (int) (pEnd - lib_name - strlen(VK_LAYER_LIBRARY_PREFIX) + 1);
if (pEnd == NULL || siz <= 0)
continue;
cpyStr = malloc(siz);
@ -1097,7 +1097,7 @@ LOADER_EXPORT XGL_RESULT XGLAPI xglEnumerateLayers(XGL_PHYSICAL_GPU gpu, size_t
free(cpyStr);
continue;
}
strncpy(cpyStr, lib_name + strlen(XGL_LAYER_LIBRARY_PREFIX), siz);
strncpy(cpyStr, lib_name + strlen(VK_LAYER_LIBRARY_PREFIX), siz);
cpyStr[siz - 1] = '\0';
if (siz > maxStringSize)
siz = (int) maxStringSize;
@ -1108,11 +1108,11 @@ LOADER_EXPORT XGL_RESULT XGLAPI xglEnumerateLayers(XGL_PHYSICAL_GPU gpu, size_t
} else {
size_t cnt;
uint32_t n;
XGL_RESULT res;
VK_RESULT res;
n = (uint32_t) ((maxStringSize < 256) ? maxStringSize : 256);
res = fpEnumerateLayers(NULL, 16, n, &cnt, layers, (char *) icd->gpus + gpu_index);
loader_platform_close_library(handle);
if (res != XGL_SUCCESS)
if (res != VK_SUCCESS)
continue;
if (cnt + count > maxLayerCount)
cnt = maxLayerCount - count;
@ -1127,18 +1127,18 @@ LOADER_EXPORT XGL_RESULT XGLAPI xglEnumerateLayers(XGL_PHYSICAL_GPU gpu, size_t
*pOutLayerCount = count;
return XGL_SUCCESS;
return VK_SUCCESS;
}
LOADER_EXPORT XGL_RESULT XGLAPI xglDbgRegisterMsgCallback(XGL_INSTANCE instance, XGL_DBG_MSG_CALLBACK_FUNCTION pfnMsgCallback, void* pUserData)
LOADER_EXPORT VK_RESULT VKAPI vkDbgRegisterMsgCallback(VK_INSTANCE instance, VK_DBG_MSG_CALLBACK_FUNCTION pfnMsgCallback, void* pUserData)
{
const struct loader_icd *icd;
struct loader_instance *inst;
XGL_RESULT res;
VK_RESULT res;
uint32_t gpu_idx;
if (instance == XGL_NULL_HANDLE)
return XGL_ERROR_INVALID_HANDLE;
if (instance == VK_NULL_HANDLE)
return VK_ERROR_INVALID_HANDLE;
assert(loader.icds_scanned);
@ -1147,19 +1147,19 @@ LOADER_EXPORT XGL_RESULT XGLAPI xglDbgRegisterMsgCallback(XGL_INSTANCE instance,
break;
}
if (inst == XGL_NULL_HANDLE)
return XGL_ERROR_INVALID_HANDLE;
if (inst == VK_NULL_HANDLE)
return VK_ERROR_INVALID_HANDLE;
for (icd = inst->icds; icd; icd = icd->next) {
for (uint32_t i = 0; i < icd->gpu_count; i++) {
res = (icd->loader_dispatch + i)->DbgRegisterMsgCallback(icd->scanned_icds->instance,
pfnMsgCallback, pUserData);
if (res != XGL_SUCCESS) {
if (res != VK_SUCCESS) {
gpu_idx = i;
break;
}
}
if (res != XGL_SUCCESS)
if (res != VK_SUCCESS)
break;
}
@ -1178,15 +1178,15 @@ LOADER_EXPORT XGL_RESULT XGLAPI xglDbgRegisterMsgCallback(XGL_INSTANCE instance,
return res;
}
return XGL_SUCCESS;
return VK_SUCCESS;
}
LOADER_EXPORT XGL_RESULT XGLAPI xglDbgUnregisterMsgCallback(XGL_INSTANCE instance, XGL_DBG_MSG_CALLBACK_FUNCTION pfnMsgCallback)
LOADER_EXPORT VK_RESULT VKAPI vkDbgUnregisterMsgCallback(VK_INSTANCE instance, VK_DBG_MSG_CALLBACK_FUNCTION pfnMsgCallback)
{
XGL_RESULT res = XGL_SUCCESS;
VK_RESULT res = VK_SUCCESS;
struct loader_instance *inst;
if (instance == XGL_NULL_HANDLE)
return XGL_ERROR_INVALID_HANDLE;
if (instance == VK_NULL_HANDLE)
return VK_ERROR_INVALID_HANDLE;
assert(loader.icds_scanned);
@ -1195,14 +1195,14 @@ LOADER_EXPORT XGL_RESULT XGLAPI xglDbgUnregisterMsgCallback(XGL_INSTANCE instanc
break;
}
if (inst == XGL_NULL_HANDLE)
return XGL_ERROR_INVALID_HANDLE;
if (inst == VK_NULL_HANDLE)
return VK_ERROR_INVALID_HANDLE;
for (const struct loader_icd * icd = inst->icds; icd; icd = icd->next) {
for (uint32_t i = 0; i < icd->gpu_count; i++) {
XGL_RESULT r;
VK_RESULT r;
r = (icd->loader_dispatch + i)->DbgUnregisterMsgCallback(icd->scanned_icds->instance, pfnMsgCallback);
if (r != XGL_SUCCESS) {
if (r != VK_SUCCESS) {
res = r;
}
}
@ -1210,12 +1210,12 @@ LOADER_EXPORT XGL_RESULT XGLAPI xglDbgUnregisterMsgCallback(XGL_INSTANCE instanc
return res;
}
LOADER_EXPORT XGL_RESULT XGLAPI xglDbgSetGlobalOption(XGL_INSTANCE instance, XGL_DBG_GLOBAL_OPTION dbgOption, size_t dataSize, const void* pData)
LOADER_EXPORT VK_RESULT VKAPI vkDbgSetGlobalOption(VK_INSTANCE instance, VK_DBG_GLOBAL_OPTION dbgOption, size_t dataSize, const void* pData)
{
XGL_RESULT res = XGL_SUCCESS;
VK_RESULT res = VK_SUCCESS;
struct loader_instance *inst;
if (instance == XGL_NULL_HANDLE)
return XGL_ERROR_INVALID_HANDLE;
if (instance == VK_NULL_HANDLE)
return VK_ERROR_INVALID_HANDLE;
assert(loader.icds_scanned);
@ -1224,15 +1224,15 @@ LOADER_EXPORT XGL_RESULT XGLAPI xglDbgSetGlobalOption(XGL_INSTANCE instance, XGL
break;
}
if (inst == XGL_NULL_HANDLE)
return XGL_ERROR_INVALID_HANDLE;
if (inst == VK_NULL_HANDLE)
return VK_ERROR_INVALID_HANDLE;
for (const struct loader_icd * icd = inst->icds; icd; icd = icd->next) {
for (uint32_t i = 0; i < icd->gpu_count; i++) {
XGL_RESULT r;
VK_RESULT r;
r = (icd->loader_dispatch + i)->DbgSetGlobalOption(icd->scanned_icds->instance, dbgOption,
dataSize, pData);
/* unfortunately we cannot roll back */
if (r != XGL_SUCCESS) {
if (r != VK_SUCCESS) {
res = r;
}
}


@ -1,5 +1,5 @@
/*
* XGL
* Vulkan
*
* Copyright (C) 2014 LunarG, Inc.
*
@ -28,15 +28,15 @@
#ifndef LOADER_H
#define LOADER_H
#include <xgl.h>
#include <xglDbg.h>
#include <vulkan.h>
#include <vkDbg.h>
#if defined(WIN32)
// FIXME: NEED WINDOWS EQUIVALENT
#else // WIN32
#include <xglWsiX11Ext.h>
#include <vkWsiX11Ext.h>
#endif // WIN32
#include <xglLayer.h>
#include <xglIcd.h>
#include <vkLayer.h>
#include <vkIcd.h>
#include <assert.h>
#if defined(__GNUC__) && __GNUC__ >= 4
@ -65,16 +65,16 @@ static inline void loader_init_data(void *obj, const void *data)
loader_set_data(obj, data);
}
static inline void *loader_unwrap_gpu(XGL_PHYSICAL_GPU *gpu)
static inline void *loader_unwrap_gpu(VK_PHYSICAL_GPU *gpu)
{
const XGL_BASE_LAYER_OBJECT *wrap = (const XGL_BASE_LAYER_OBJECT *) *gpu;
const VK_BASE_LAYER_OBJECT *wrap = (const VK_BASE_LAYER_OBJECT *) *gpu;
*gpu = (XGL_PHYSICAL_GPU) wrap->nextObject;
*gpu = (VK_PHYSICAL_GPU) wrap->nextObject;
return loader_get_data(wrap->baseObject);
}
extern uint32_t loader_activate_layers(XGL_PHYSICAL_GPU gpu, const XGL_DEVICE_CREATE_INFO* pCreateInfo);
extern uint32_t loader_activate_layers(VK_PHYSICAL_GPU gpu, const VK_DEVICE_CREATE_INFO* pCreateInfo);
#define MAX_LAYER_LIBRARIES 64
#endif /* LOADER_H */


@ -1,5 +1,5 @@
/*
* XGL
* Vulkan
*
* Copyright (C) 2015 LunarG, Inc.
* Copyright 2014 Valve Software
@ -42,26 +42,26 @@
#include <pthread.h>
#include <assert.h>
// XGL Library Filenames, Paths, etc.:
// VK Library Filenames, Paths, etc.:
#define PATH_SEPERATOR ':'
#define DIRECTORY_SYMBOL "/"
#define DRIVER_PATH_ENV "LIBXGL_DRIVERS_PATH"
#define LAYERS_PATH_ENV "LIBXGL_LAYERS_PATH"
#define LAYER_NAMES_ENV "LIBXGL_LAYER_NAMES"
#ifndef DEFAULT_XGL_DRIVERS_PATH
#define DRIVER_PATH_ENV "LIBVK_DRIVERS_PATH"
#define LAYERS_PATH_ENV "LIBVK_LAYERS_PATH"
#define LAYER_NAMES_ENV "LIBVK_LAYER_NAMES"
#ifndef DEFAULT_VK_DRIVERS_PATH
// TODO: Is this a good default location?
// Need to search for both 32bit and 64bit ICDs
#define DEFAULT_XGL_DRIVERS_PATH "/usr/lib/i386-linux-gnu/xgl:/usr/lib/x86_64-linux-gnu/xgl"
#define XGL_DRIVER_LIBRARY_PREFIX "libXGL_"
#define XGL_DRIVER_LIBRARY_PREFIX_LEN 7
#define XGL_LAYER_LIBRARY_PREFIX "libXGLLayer"
#define XGL_LAYER_LIBRARY_PREFIX_LEN 11
#define XGL_LIBRARY_SUFFIX ".so"
#define XGL_LIBRARY_SUFFIX_LEN 3
#endif // DEFAULT_XGL_DRIVERS_PATH
#ifndef DEFAULT_XGL_LAYERS_PATH
#define DEFAULT_VK_DRIVERS_PATH "/usr/lib/i386-linux-gnu/vk:/usr/lib/x86_64-linux-gnu/vk"
#define VK_DRIVER_LIBRARY_PREFIX "libVK_"
#define VK_DRIVER_LIBRARY_PREFIX_LEN 6
#define VK_LAYER_LIBRARY_PREFIX "libVKLayer"
#define VK_LAYER_LIBRARY_PREFIX_LEN 10
#define VK_LIBRARY_SUFFIX ".so"
#define VK_LIBRARY_SUFFIX_LEN 3
#endif // DEFAULT_VK_DRIVERS_PATH
#ifndef DEFAULT_VK_LAYERS_PATH
// TODO: Are these good default locations?
#define DEFAULT_XGL_LAYERS_PATH ".:/usr/lib/i386-linux-gnu/xgl:/usr/lib/x86_64-linux-gnu/xgl"
#define DEFAULT_VK_LAYERS_PATH ".:/usr/lib/i386-linux-gnu/vk:/usr/lib/x86_64-linux-gnu/vk"
#endif
// C99:
@ -144,32 +144,32 @@ static inline void loader_platform_thread_delete_mutex(loader_platform_thread_mu
using namespace std;
#endif // __cplusplus
// XGL Library Filenames, Paths, etc.:
// VK Library Filenames, Paths, etc.:
#define PATH_SEPERATOR ';'
#define DIRECTORY_SYMBOL "\\"
#define DRIVER_PATH_REGISTRY_VALUE "XGL_DRIVERS_PATH"
#define LAYERS_PATH_REGISTRY_VALUE "XGL_LAYERS_PATH"
#define LAYER_NAMES_REGISTRY_VALUE "XGL_LAYER_NAMES"
#define DRIVER_PATH_ENV "XGL_DRIVERS_PATH"
#define LAYERS_PATH_ENV "XGL_LAYERS_PATH"
#define LAYER_NAMES_ENV "XGL_LAYER_NAMES"
#ifndef DEFAULT_XGL_DRIVERS_PATH
#define DRIVER_PATH_REGISTRY_VALUE "VK_DRIVERS_PATH"
#define LAYERS_PATH_REGISTRY_VALUE "VK_LAYERS_PATH"
#define LAYER_NAMES_REGISTRY_VALUE "VK_LAYER_NAMES"
#define DRIVER_PATH_ENV "VK_DRIVERS_PATH"
#define LAYERS_PATH_ENV "VK_LAYERS_PATH"
#define LAYER_NAMES_ENV "VK_LAYER_NAMES"
#ifndef DEFAULT_VK_DRIVERS_PATH
// TODO: Is this a good default location?
// Need to search for both 32bit and 64bit ICDs
#define DEFAULT_XGL_DRIVERS_PATH "C:\\Windows\\System32"
#define DEFAULT_VK_DRIVERS_PATH "C:\\Windows\\System32"
// TODO/TBD: Is this an appropriate prefix for Windows?
#define XGL_DRIVER_LIBRARY_PREFIX "XGL_"
#define XGL_DRIVER_LIBRARY_PREFIX_LEN 4
#define VK_DRIVER_LIBRARY_PREFIX "VK_"
#define VK_DRIVER_LIBRARY_PREFIX_LEN 4
// TODO/TBD: Is this an appropriate suffix for Windows?
#define XGL_LAYER_LIBRARY_PREFIX "XGLLayer"
#define XGL_LAYER_LIBRARY_PREFIX_LEN 8
#define XGL_LIBRARY_SUFFIX ".dll"
#define XGL_LIBRARY_SUFFIX_LEN 4
#endif // DEFAULT_XGL_DRIVERS_PATH
#ifndef DEFAULT_XGL_LAYERS_PATH
#define VK_LAYER_LIBRARY_PREFIX "VKLayer"
#define VK_LAYER_LIBRARY_PREFIX_LEN 8
#define VK_LIBRARY_SUFFIX ".dll"
#define VK_LIBRARY_SUFFIX_LEN 4
#endif // DEFAULT_VK_DRIVERS_PATH
#ifndef DEFAULT_VK_LAYERS_PATH
// TODO: Is this a good default location?
#define DEFAULT_XGL_LAYERS_PATH "C:\\Windows\\System32"
#endif // DEFAULT_XGL_LAYERS_PATH
#define DEFAULT_VK_LAYERS_PATH "C:\\Windows\\System32"
#endif // DEFAULT_VK_LAYERS_PATH
// C99:
// Microsoft didn't implement C99 in Visual Studio; but started adding it with


@ -14,7 +14,7 @@ if(NOT ImageMagick_FOUND)
message(FATAL_ERROR "Missing ImageMagick library: sudo apt-get install libmagickwand-dev")
endif()
set(CMAKE_CXX_FLAGS "${CMAKE_CXX_FLAGS} -DXGL_PROTOTYPES -Wno-sign-compare")
set(CMAKE_CXX_FLAGS "${CMAKE_CXX_FLAGS} -DVK_PROTOTYPES -Wno-sign-compare")
SET(COMMON_CPP
xglrenderframework.cpp
@ -78,28 +78,28 @@ set_target_properties(xglbase
COMPILE_DEFINITIONS "GTEST_LINKED_AS_SHARED_LIBRARY=1")
target_link_libraries(xglbase XGL gtest gtest_main ${TEST_LIBRARIES})
add_executable(xgl_image_tests image_tests.cpp ${COMMON_CPP})
set_target_properties(xgl_image_tests
add_executable(vk_image_tests image_tests.cpp ${COMMON_CPP})
set_target_properties(vk_image_tests
PROPERTIES
COMPILE_DEFINITIONS "GTEST_LINKED_AS_SHARED_LIBRARY=1")
target_link_libraries(xgl_image_tests XGL gtest gtest_main ${TEST_LIBRARIES})
target_link_libraries(vk_image_tests XGL gtest gtest_main ${TEST_LIBRARIES})
add_executable(xgl_render_tests render_tests.cpp ${COMMON_CPP})
set_target_properties(xgl_render_tests
add_executable(vk_render_tests render_tests.cpp ${COMMON_CPP})
set_target_properties(vk_render_tests
PROPERTIES
COMPILE_DEFINITIONS "GTEST_LINKED_AS_SHARED_LIBRARY=1")
target_link_libraries(xgl_render_tests XGL gtest gtest_main ${TEST_LIBRARIES})
target_link_libraries(vk_render_tests XGL gtest gtest_main ${TEST_LIBRARIES})
add_executable(xgl_blit_tests blit_tests.cpp ${COMMON_CPP})
set_target_properties(xgl_blit_tests
add_executable(vk_blit_tests blit_tests.cpp ${COMMON_CPP})
set_target_properties(vk_blit_tests
PROPERTIES
COMPILE_DEFINITIONS "GTEST_LINKED_AS_SHARED_LIBRARY=1")
target_link_libraries(xgl_blit_tests XGL gtest gtest_main ${TEST_LIBRARIES})
target_link_libraries(vk_blit_tests XGL gtest gtest_main ${TEST_LIBRARIES})
add_executable(xgl_layer_validation_tests layer_validation_tests.cpp ${COMMON_CPP})
set_target_properties(xgl_layer_validation_tests
add_executable(vk_layer_validation_tests layer_validation_tests.cpp ${COMMON_CPP})
set_target_properties(vk_layer_validation_tests
PROPERTIES
COMPILE_DEFINITIONS "GTEST_LINKED_AS_SHARED_LIBRARY=1")
target_link_libraries(xgl_layer_validation_tests XGL gtest gtest_main ${TEST_LIBRARIES})
target_link_libraries(vk_layer_validation_tests XGL gtest gtest_main ${TEST_LIBRARIES})
add_subdirectory(gtest-1.7.0)

File diff suppressed because it is too large

View File

@ -28,7 +28,7 @@
// OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
// XGL tests
// VK tests
//
// Copyright (C) 2014 LunarG, Inc.
//
@ -51,16 +51,16 @@
// DEALINGS IN THE SOFTWARE.
// Verify XGL driver initialization
// Verify VK driver initialization
#include <stdlib.h>
#include <stdio.h>
#include <stdbool.h>
#include <string.h>
#include <xgl.h>
#include <vulkan.h>
#include "gtest-1.7.0/include/gtest/gtest.h"
#include "xgltestbinding.h"
#include "vktestbinding.h"
#include "test_common.h"
class XglImageTest : public ::testing::Test {
@ -68,62 +68,62 @@ public:
void CreateImage(uint32_t w, uint32_t h);
void DestroyImage();
void CreateImageView(XGL_IMAGE_VIEW_CREATE_INFO* pCreateInfo,
XGL_IMAGE_VIEW* pView);
void DestroyImageView(XGL_IMAGE_VIEW imageView);
XGL_DEVICE device() {return m_device->obj();}
void CreateImageView(VK_IMAGE_VIEW_CREATE_INFO* pCreateInfo,
VK_IMAGE_VIEW* pView);
void DestroyImageView(VK_IMAGE_VIEW imageView);
VK_DEVICE device() {return m_device->obj();}
protected:
xgl_testing::Device *m_device;
XGL_APPLICATION_INFO app_info;
XGL_PHYSICAL_GPU objs[XGL_MAX_PHYSICAL_GPUS];
vk_testing::Device *m_device;
VK_APPLICATION_INFO app_info;
VK_PHYSICAL_GPU objs[VK_MAX_PHYSICAL_GPUS];
uint32_t gpu_count;
XGL_INSTANCE inst;
XGL_IMAGE m_image;
XGL_GPU_MEMORY *m_image_mem;
VK_INSTANCE inst;
VK_IMAGE m_image;
VK_GPU_MEMORY *m_image_mem;
uint32_t m_num_mem;
virtual void SetUp() {
XGL_RESULT err;
VK_RESULT err;
this->app_info.sType = XGL_STRUCTURE_TYPE_APPLICATION_INFO;
this->app_info.sType = VK_STRUCTURE_TYPE_APPLICATION_INFO;
this->app_info.pNext = NULL;
this->app_info.pAppName = "base";
this->app_info.appVersion = 1;
this->app_info.pEngineName = "unittest";
this->app_info.engineVersion = 1;
this->app_info.apiVersion = XGL_API_VERSION;
XGL_INSTANCE_CREATE_INFO inst_info = {};
inst_info.sType = XGL_STRUCTURE_TYPE_INSTANCE_CREATE_INFO;
this->app_info.apiVersion = VK_API_VERSION;
VK_INSTANCE_CREATE_INFO inst_info = {};
inst_info.sType = VK_STRUCTURE_TYPE_INSTANCE_CREATE_INFO;
inst_info.pNext = NULL;
inst_info.pAppInfo = &app_info;
inst_info.pAllocCb = NULL;
inst_info.extensionCount = 0;
inst_info.ppEnabledExtensionNames = NULL;
err = xglCreateInstance(&inst_info, &this->inst);
ASSERT_XGL_SUCCESS(err);
err = xglEnumerateGpus(this->inst, XGL_MAX_PHYSICAL_GPUS,
err = vkCreateInstance(&inst_info, &this->inst);
ASSERT_VK_SUCCESS(err);
err = vkEnumerateGpus(this->inst, VK_MAX_PHYSICAL_GPUS,
&this->gpu_count, objs);
ASSERT_XGL_SUCCESS(err);
ASSERT_VK_SUCCESS(err);
ASSERT_GE(this->gpu_count, 1) << "No GPU available";
this->m_device = new xgl_testing::Device(objs[0]);
this->m_device = new vk_testing::Device(objs[0]);
this->m_device->init();
}
virtual void TearDown() {
xglDestroyInstance(this->inst);
vkDestroyInstance(this->inst);
}
};
void XglImageTest::CreateImage(uint32_t w, uint32_t h)
{
XGL_RESULT err;
VK_RESULT err;
uint32_t mipCount;
size_t size;
XGL_FORMAT fmt;
XGL_FORMAT_PROPERTIES image_fmt;
VK_FORMAT fmt;
VK_FORMAT_PROPERTIES image_fmt;
mipCount = 0;
@ -136,41 +136,41 @@ void XglImageTest::CreateImage(uint32_t w, uint32_t h)
mipCount++;
}
fmt = XGL_FMT_R8G8B8A8_UINT;
fmt = VK_FMT_R8G8B8A8_UINT;
// TODO: Pick known good format rather than just expect common format
/*
* XXX: What should happen if given NULL HANDLE for the pData argument?
* We're not requesting XGL_INFO_TYPE_MEMORY_REQUIREMENTS so there is
* We're not requesting VK_INFO_TYPE_MEMORY_REQUIREMENTS so there is
* an expectation that pData is a valid pointer.
* However, why include a returned size value? That implies that the
* amount of data may vary and that doesn't work well for using a
* fixed structure.
*/
size = sizeof(image_fmt);
err = xglGetFormatInfo(this->device(), fmt,
XGL_INFO_TYPE_FORMAT_PROPERTIES,
err = vkGetFormatInfo(this->device(), fmt,
VK_INFO_TYPE_FORMAT_PROPERTIES,
&size, &image_fmt);
ASSERT_XGL_SUCCESS(err);
ASSERT_VK_SUCCESS(err);
// typedef struct _XGL_IMAGE_CREATE_INFO
// typedef struct _VK_IMAGE_CREATE_INFO
// {
// XGL_STRUCTURE_TYPE sType; // Must be XGL_STRUCTURE_TYPE_IMAGE_CREATE_INFO
// VK_STRUCTURE_TYPE sType; // Must be VK_STRUCTURE_TYPE_IMAGE_CREATE_INFO
// const void* pNext; // Pointer to next structure.
// XGL_IMAGE_TYPE imageType;
// XGL_FORMAT format;
// XGL_EXTENT3D extent;
// VK_IMAGE_TYPE imageType;
// VK_FORMAT format;
// VK_EXTENT3D extent;
// uint32_t mipLevels;
// uint32_t arraySize;
// uint32_t samples;
// XGL_IMAGE_TILING tiling;
// XGL_FLAGS usage; // XGL_IMAGE_USAGE_FLAGS
// XGL_FLAGS flags; // XGL_IMAGE_CREATE_FLAGS
// } XGL_IMAGE_CREATE_INFO;
// VK_IMAGE_TILING tiling;
// VK_FLAGS usage; // VK_IMAGE_USAGE_FLAGS
// VK_FLAGS flags; // VK_IMAGE_CREATE_FLAGS
// } VK_IMAGE_CREATE_INFO;
XGL_IMAGE_CREATE_INFO imageCreateInfo = {};
imageCreateInfo.sType = XGL_STRUCTURE_TYPE_IMAGE_CREATE_INFO;
imageCreateInfo.imageType = XGL_IMAGE_2D;
VK_IMAGE_CREATE_INFO imageCreateInfo = {};
imageCreateInfo.sType = VK_STRUCTURE_TYPE_IMAGE_CREATE_INFO;
imageCreateInfo.imageType = VK_IMAGE_2D;
imageCreateInfo.format = fmt;
imageCreateInfo.arraySize = 1;
imageCreateInfo.extent.width = w;
@ -178,151 +178,151 @@ void XglImageTest::CreateImage(uint32_t w, uint32_t h)
imageCreateInfo.extent.depth = 1;
imageCreateInfo.mipLevels = mipCount;
imageCreateInfo.samples = 1;
if (image_fmt.linearTilingFeatures & XGL_FORMAT_IMAGE_SHADER_READ_BIT) {
imageCreateInfo.tiling = XGL_LINEAR_TILING;
if (image_fmt.linearTilingFeatures & VK_FORMAT_IMAGE_SHADER_READ_BIT) {
imageCreateInfo.tiling = VK_LINEAR_TILING;
}
else if (image_fmt.optimalTilingFeatures & XGL_FORMAT_IMAGE_SHADER_READ_BIT) {
imageCreateInfo.tiling = XGL_OPTIMAL_TILING;
else if (image_fmt.optimalTilingFeatures & VK_FORMAT_IMAGE_SHADER_READ_BIT) {
imageCreateInfo.tiling = VK_OPTIMAL_TILING;
}
else {
ASSERT_TRUE(false) << "Cannot find supported tiling format - Exiting";
}
// Image usage flags
// typedef enum _XGL_IMAGE_USAGE_FLAGS
// typedef enum _VK_IMAGE_USAGE_FLAGS
// {
// XGL_IMAGE_USAGE_SHADER_ACCESS_READ_BIT = 0x00000001,
// XGL_IMAGE_USAGE_SHADER_ACCESS_WRITE_BIT = 0x00000002,
// XGL_IMAGE_USAGE_COLOR_ATTACHMENT_BIT = 0x00000004,
// XGL_IMAGE_USAGE_DEPTH_STENCIL_BIT = 0x00000008,
// } XGL_IMAGE_USAGE_FLAGS;
imageCreateInfo.usage = XGL_IMAGE_USAGE_SHADER_ACCESS_WRITE_BIT | XGL_IMAGE_USAGE_COLOR_ATTACHMENT_BIT;
// VK_IMAGE_USAGE_SHADER_ACCESS_READ_BIT = 0x00000001,
// VK_IMAGE_USAGE_SHADER_ACCESS_WRITE_BIT = 0x00000002,
// VK_IMAGE_USAGE_COLOR_ATTACHMENT_BIT = 0x00000004,
// VK_IMAGE_USAGE_DEPTH_STENCIL_BIT = 0x00000008,
// } VK_IMAGE_USAGE_FLAGS;
imageCreateInfo.usage = VK_IMAGE_USAGE_SHADER_ACCESS_WRITE_BIT | VK_IMAGE_USAGE_COLOR_ATTACHMENT_BIT;
// XGL_RESULT XGLAPI xglCreateImage(
// XGL_DEVICE device,
// const XGL_IMAGE_CREATE_INFO* pCreateInfo,
// XGL_IMAGE* pImage);
err = xglCreateImage(device(), &imageCreateInfo, &m_image);
ASSERT_XGL_SUCCESS(err);
// VK_RESULT VKAPI vkCreateImage(
// VK_DEVICE device,
// const VK_IMAGE_CREATE_INFO* pCreateInfo,
// VK_IMAGE* pImage);
err = vkCreateImage(device(), &imageCreateInfo, &m_image);
ASSERT_VK_SUCCESS(err);
XGL_MEMORY_REQUIREMENTS *mem_req;
size_t mem_reqs_size = sizeof(XGL_MEMORY_REQUIREMENTS);
XGL_IMAGE_MEMORY_REQUIREMENTS img_reqs;
size_t img_reqs_size = sizeof(XGL_IMAGE_MEMORY_REQUIREMENTS);
VK_MEMORY_REQUIREMENTS *mem_req;
size_t mem_reqs_size = sizeof(VK_MEMORY_REQUIREMENTS);
VK_IMAGE_MEMORY_REQUIREMENTS img_reqs;
size_t img_reqs_size = sizeof(VK_IMAGE_MEMORY_REQUIREMENTS);
uint32_t num_allocations = 0;
size_t num_alloc_size = sizeof(num_allocations);
XGL_MEMORY_ALLOC_IMAGE_INFO img_alloc = {};
img_alloc.sType = XGL_STRUCTURE_TYPE_MEMORY_ALLOC_IMAGE_INFO;
VK_MEMORY_ALLOC_IMAGE_INFO img_alloc = {};
img_alloc.sType = VK_STRUCTURE_TYPE_MEMORY_ALLOC_IMAGE_INFO;
img_alloc.pNext = NULL;
XGL_MEMORY_ALLOC_INFO mem_info = {};
mem_info.sType = XGL_STRUCTURE_TYPE_MEMORY_ALLOC_INFO;
VK_MEMORY_ALLOC_INFO mem_info = {};
mem_info.sType = VK_STRUCTURE_TYPE_MEMORY_ALLOC_INFO;
mem_info.pNext = &img_alloc;
err = xglGetObjectInfo(m_image, XGL_INFO_TYPE_MEMORY_ALLOCATION_COUNT,
err = vkGetObjectInfo(m_image, VK_INFO_TYPE_MEMORY_ALLOCATION_COUNT,
&num_alloc_size, &num_allocations);
ASSERT_XGL_SUCCESS(err);
ASSERT_VK_SUCCESS(err);
ASSERT_EQ(num_alloc_size,sizeof(num_allocations));
mem_req = (XGL_MEMORY_REQUIREMENTS *) malloc(num_allocations * sizeof(XGL_MEMORY_REQUIREMENTS));
m_image_mem = (XGL_GPU_MEMORY *) malloc(num_allocations * sizeof(XGL_GPU_MEMORY));
mem_req = (VK_MEMORY_REQUIREMENTS *) malloc(num_allocations * sizeof(VK_MEMORY_REQUIREMENTS));
m_image_mem = (VK_GPU_MEMORY *) malloc(num_allocations * sizeof(VK_GPU_MEMORY));
m_num_mem = num_allocations;
err = xglGetObjectInfo(m_image,
XGL_INFO_TYPE_MEMORY_REQUIREMENTS,
err = vkGetObjectInfo(m_image,
VK_INFO_TYPE_MEMORY_REQUIREMENTS,
&mem_reqs_size, mem_req);
ASSERT_XGL_SUCCESS(err);
ASSERT_EQ(mem_reqs_size, num_allocations * sizeof(XGL_MEMORY_REQUIREMENTS));
err = xglGetObjectInfo(m_image,
XGL_INFO_TYPE_IMAGE_MEMORY_REQUIREMENTS,
ASSERT_VK_SUCCESS(err);
ASSERT_EQ(mem_reqs_size, num_allocations * sizeof(VK_MEMORY_REQUIREMENTS));
err = vkGetObjectInfo(m_image,
VK_INFO_TYPE_IMAGE_MEMORY_REQUIREMENTS,
&img_reqs_size, &img_reqs);
ASSERT_XGL_SUCCESS(err);
ASSERT_EQ(img_reqs_size, sizeof(XGL_IMAGE_MEMORY_REQUIREMENTS));
ASSERT_VK_SUCCESS(err);
ASSERT_EQ(img_reqs_size, sizeof(VK_IMAGE_MEMORY_REQUIREMENTS));
img_alloc.usage = img_reqs.usage;
img_alloc.formatClass = img_reqs.formatClass;
img_alloc.samples = img_reqs.samples;
for (uint32_t i = 0; i < num_allocations; i ++) {
ASSERT_NE(0, mem_req[i].size) << "xglGetObjectInfo (Image): Failed - expect images to require memory";
ASSERT_NE(0, mem_req[i].size) << "vkGetObjectInfo (Image): Failed - expect images to require memory";
mem_info.allocationSize = mem_req[i].size;
mem_info.memProps = XGL_MEMORY_PROPERTY_SHAREABLE_BIT;
mem_info.memType = XGL_MEMORY_TYPE_IMAGE;
mem_info.memPriority = XGL_MEMORY_PRIORITY_NORMAL;
mem_info.memProps = VK_MEMORY_PROPERTY_SHAREABLE_BIT;
mem_info.memType = VK_MEMORY_TYPE_IMAGE;
mem_info.memPriority = VK_MEMORY_PRIORITY_NORMAL;
/* allocate memory */
err = xglAllocMemory(device(), &mem_info, &m_image_mem[i]);
ASSERT_XGL_SUCCESS(err);
err = vkAllocMemory(device(), &mem_info, &m_image_mem[i]);
ASSERT_VK_SUCCESS(err);
/* bind memory */
err = xglBindObjectMemory(m_image, i, m_image_mem[i], 0);
ASSERT_XGL_SUCCESS(err);
err = vkBindObjectMemory(m_image, i, m_image_mem[i], 0);
ASSERT_VK_SUCCESS(err);
}
}
void XglImageTest::DestroyImage()
{
XGL_RESULT err;
VK_RESULT err;
// All done with image memory, clean up
ASSERT_XGL_SUCCESS(xglBindObjectMemory(m_image, 0, XGL_NULL_HANDLE, 0));
ASSERT_VK_SUCCESS(vkBindObjectMemory(m_image, 0, VK_NULL_HANDLE, 0));
for (uint32_t i = 0 ; i < m_num_mem; i++) {
err = xglFreeMemory(m_image_mem[i]);
ASSERT_XGL_SUCCESS(err);
err = vkFreeMemory(m_image_mem[i]);
ASSERT_VK_SUCCESS(err);
}
ASSERT_XGL_SUCCESS(xglDestroyObject(m_image));
ASSERT_VK_SUCCESS(vkDestroyObject(m_image));
}
void XglImageTest::CreateImageView(XGL_IMAGE_VIEW_CREATE_INFO *pCreateInfo,
XGL_IMAGE_VIEW *pView)
void XglImageTest::CreateImageView(VK_IMAGE_VIEW_CREATE_INFO *pCreateInfo,
VK_IMAGE_VIEW *pView)
{
pCreateInfo->image = this->m_image;
ASSERT_XGL_SUCCESS(xglCreateImageView(device(), pCreateInfo, pView));
ASSERT_VK_SUCCESS(vkCreateImageView(device(), pCreateInfo, pView));
}
void XglImageTest::DestroyImageView(XGL_IMAGE_VIEW imageView)
void XglImageTest::DestroyImageView(VK_IMAGE_VIEW imageView)
{
ASSERT_XGL_SUCCESS(xglDestroyObject(imageView));
ASSERT_VK_SUCCESS(vkDestroyObject(imageView));
}
TEST_F(XglImageTest, CreateImageViewTest) {
XGL_FORMAT fmt;
XGL_IMAGE_VIEW imageView;
VK_FORMAT fmt;
VK_IMAGE_VIEW imageView;
fmt = XGL_FMT_R8G8B8A8_UINT;
fmt = VK_FMT_R8G8B8A8_UINT;
CreateImage(512, 256);
// typedef struct _XGL_IMAGE_VIEW_CREATE_INFO
// typedef struct _VK_IMAGE_VIEW_CREATE_INFO
// {
// XGL_STRUCTURE_TYPE sType; // Must be XGL_STRUCTURE_TYPE_IMAGE_VIEW_CREATE_INFO
// VK_STRUCTURE_TYPE sType; // Must be VK_STRUCTURE_TYPE_IMAGE_VIEW_CREATE_INFO
// const void* pNext; // Pointer to next structure
// XGL_IMAGE image;
// XGL_IMAGE_VIEW_TYPE viewType;
// XGL_FORMAT format;
// XGL_CHANNEL_MAPPING channels;
// XGL_IMAGE_SUBRESOURCE_RANGE subresourceRange;
// VK_IMAGE image;
// VK_IMAGE_VIEW_TYPE viewType;
// VK_FORMAT format;
// VK_CHANNEL_MAPPING channels;
// VK_IMAGE_SUBRESOURCE_RANGE subresourceRange;
// float minLod;
// } XGL_IMAGE_VIEW_CREATE_INFO;
XGL_IMAGE_VIEW_CREATE_INFO viewInfo = {};
viewInfo.sType = XGL_STRUCTURE_TYPE_IMAGE_VIEW_CREATE_INFO;
viewInfo.viewType = XGL_IMAGE_VIEW_2D;
// } VK_IMAGE_VIEW_CREATE_INFO;
VK_IMAGE_VIEW_CREATE_INFO viewInfo = {};
viewInfo.sType = VK_STRUCTURE_TYPE_IMAGE_VIEW_CREATE_INFO;
viewInfo.viewType = VK_IMAGE_VIEW_2D;
viewInfo.format = fmt;
viewInfo.channels.r = XGL_CHANNEL_SWIZZLE_R;
viewInfo.channels.g = XGL_CHANNEL_SWIZZLE_G;
viewInfo.channels.b = XGL_CHANNEL_SWIZZLE_B;
viewInfo.channels.a = XGL_CHANNEL_SWIZZLE_A;
viewInfo.channels.r = VK_CHANNEL_SWIZZLE_R;
viewInfo.channels.g = VK_CHANNEL_SWIZZLE_G;
viewInfo.channels.b = VK_CHANNEL_SWIZZLE_B;
viewInfo.channels.a = VK_CHANNEL_SWIZZLE_A;
viewInfo.subresourceRange.baseArraySlice = 0;
viewInfo.subresourceRange.arraySize = 1;
viewInfo.subresourceRange.baseMipLevel = 0;
viewInfo.subresourceRange.mipLevels = 1;
viewInfo.subresourceRange.aspect = XGL_IMAGE_ASPECT_COLOR;
viewInfo.subresourceRange.aspect = VK_IMAGE_ASPECT_COLOR;
// XGL_RESULT XGLAPI xglCreateImageView(
// XGL_DEVICE device,
// const XGL_IMAGE_VIEW_CREATE_INFO* pCreateInfo,
// XGL_IMAGE_VIEW* pView);
// VK_RESULT VKAPI vkCreateImageView(
// VK_DEVICE device,
// const VK_IMAGE_VIEW_CREATE_INFO* pCreateInfo,
// VK_IMAGE_VIEW* pView);
CreateImageView(&viewInfo, &imageView);
@ -331,6 +331,6 @@ TEST_F(XglImageTest, CreateImageViewTest) {
int main(int argc, char **argv) {
::testing::InitGoogleTest(&argc, argv);
xgl_testing::set_error_callback(test_error_callback);
vk_testing::set_error_callback(test_error_callback);
return RUN_ALL_TESTS();
}

File diff suppressed because it is too large

View File

@ -1,6 +1,6 @@
#!/usr/bin/env python3
#
# XGL
# VK
#
# Copyright (C) 2014 LunarG, Inc.
#
@ -49,7 +49,7 @@ expected_errors = {'XglRenderTest.CubeWithVertexFetchAndMVP' : ['{OBJTRACK}ERROR
'{OBJTRACK}ERROR : OBJ ERROR : GPU_MEMORY',
'{OBJTRACK}ERROR : OBJ ERROR : IMAGE'],
'XglTest.Fence' : ['{OBJTRACK}ERROR : OBJECT VALIDATION WARNING: FENCE'],
#'XglRenderTest.XGLTriangle_OutputLocation' : ['{OBJTRACK}ERROR : xglQueueSubmit Memory reference count'],
#'XglRenderTest.VKTriangle_OutputLocation' : ['{OBJTRACK}ERROR : vkQueueSubmit Memory reference count'],
'XglRenderTest.TriangleWithVertexFetch' : ['{OBJTRACK}ERROR : OBJ ERROR : CMD_BUFFER'],
'XglRenderTest.TriangleMRT' : ['{OBJTRACK}ERROR : OBJ ERROR : CMD_BUFFER'],
'XglRenderTest.QuadWithIndexedVertexFetch' : ['{OBJTRACK}ERROR : OBJ ERROR : CMD_BUFFER', '{OBJTRACK}ERROR : OBJ ERROR : CMD_BUFFER'],

File diff suppressed because it is too large

View File

@ -2,16 +2,16 @@
#
# Run all the regression tests
# xglbase tests that basic XGL calls are working (don't return an error).
./xglbase
# vkbase tests that basic VK calls are working (don't return an error).
./vkbase
# xgl_blit_tests test Fill/Copy Memory, Clears, CopyMemoryToImage
./xgl_blit_tests
# vk_blit_tests test Fill/Copy Memory, Clears, CopyMemoryToImage
./vk_blit_tests
# xgl_image_tests check that image can be allocated and bound.
./xgl_image_tests
# vk_image_tests check that image can be allocated and bound.
./vk_image_tests
#xgl_render_tests tests a variety of features using rendered images
#vk_render_tests tests a variety of features using rendered images
# --compare-images will cause the test to check the resulting image against
# a saved "golden" image and will report an error if there is any difference
./xgl_render_tests --compare-images
./vk_render_tests --compare-images

View File

@ -3,12 +3,12 @@
# Run all the regression tests with validation layers enabled
# enable layers
export LIBXGL_LAYER_NAMES=DrawState:MemTracker:ParamChecker:ObjectTracker
export LIBVK_LAYER_NAMES=DrawState:MemTracker:ParamChecker:ObjectTracker
# Save any existing settings file
RESTORE_SETTINGS="false"
SETTINGS_NAME="xgl_layer_settings.txt"
SETTINGS_NAME="vk_layer_settings.txt"
TMP_SETTINGS_NAME="xls.txt"
OUTPUT_LEVEL="XGL_DBG_LAYER_LEVEL_ERROR"
OUTPUT_LEVEL="VK_DBG_LAYER_LEVEL_ERROR"
if [ -f $SETTINGS_NAME ]; then
echo Saving $SETTINGS_NAME to $TMP_SETTINGS_NAME
RESTORE_SETTINGS="true"
@ -22,19 +22,19 @@ echo "DrawStateReportLevel = $OUTPUT_LEVEL" >> $SETTINGS_NAME
echo "ObjectTrackerReportLevel = $OUTPUT_LEVEL" >> $SETTINGS_NAME
echo "ParamCheckerReportLevel = $OUTPUT_LEVEL" >> $SETTINGS_NAME
# xglbase tests that basic XGL calls are working (don't return an error).
./xglbase
# vkbase tests that basic VK calls are working (don't return an error).
./vkbase
# xgl_blit_tests test Fill/Copy Memory, Clears, CopyMemoryToImage
./xgl_blit_tests
# vk_blit_tests test Fill/Copy Memory, Clears, CopyMemoryToImage
./vk_blit_tests
# xgl_image_tests check that image can be allocated and bound.
./xgl_image_tests
# vk_image_tests check that image can be allocated and bound.
./vk_image_tests
#xgl_render_tests tests a variety of features using rendered images
#vk_render_tests tests a variety of features using rendered images
# --compare-images will cause the test to check the resulting image against
# a saved "golden" image and will report an error if there is any difference
./xgl_render_tests --compare-images
./vk_render_tests --compare-images
if [ "$RESTORE_SETTINGS" = "true" ]; then
echo Restore $SETTINGS_NAME from $TMP_SETTINGS_NAME

View File

@ -7,58 +7,58 @@
#include <string.h>
#include <assert.h>
#include <xgl.h>
#include <vulkan.h>
#include "gtest/gtest.h"
#include "gtest-1.7.0/include/gtest/gtest.h"
#include "xgltestbinding.h"
#include "vktestbinding.h"
#define ASSERT_XGL_SUCCESS(err) ASSERT_EQ(XGL_SUCCESS, err) << xgl_result_string(err)
#define ASSERT_VK_SUCCESS(err) ASSERT_EQ(VK_SUCCESS, err) << vk_result_string(err)
static inline const char *xgl_result_string(XGL_RESULT err)
static inline const char *vk_result_string(VK_RESULT err)
{
switch (err) {
#define STR(r) case r: return #r
STR(XGL_SUCCESS);
STR(XGL_UNSUPPORTED);
STR(XGL_NOT_READY);
STR(XGL_TIMEOUT);
STR(XGL_EVENT_SET);
STR(XGL_EVENT_RESET);
STR(XGL_ERROR_UNKNOWN);
STR(XGL_ERROR_UNAVAILABLE);
STR(XGL_ERROR_INITIALIZATION_FAILED);
STR(XGL_ERROR_OUT_OF_MEMORY);
STR(XGL_ERROR_OUT_OF_GPU_MEMORY);
STR(XGL_ERROR_DEVICE_ALREADY_CREATED);
STR(XGL_ERROR_DEVICE_LOST);
STR(XGL_ERROR_INVALID_POINTER);
STR(XGL_ERROR_INVALID_VALUE);
STR(XGL_ERROR_INVALID_HANDLE);
STR(XGL_ERROR_INVALID_ORDINAL);
STR(XGL_ERROR_INVALID_MEMORY_SIZE);
STR(XGL_ERROR_INVALID_EXTENSION);
STR(XGL_ERROR_INVALID_FLAGS);
STR(XGL_ERROR_INVALID_ALIGNMENT);
STR(XGL_ERROR_INVALID_FORMAT);
STR(XGL_ERROR_INVALID_IMAGE);
STR(XGL_ERROR_INVALID_DESCRIPTOR_SET_DATA);
STR(XGL_ERROR_INVALID_QUEUE_TYPE);
STR(XGL_ERROR_INVALID_OBJECT_TYPE);
STR(XGL_ERROR_UNSUPPORTED_SHADER_IL_VERSION);
STR(XGL_ERROR_BAD_SHADER_CODE);
STR(XGL_ERROR_BAD_PIPELINE_DATA);
STR(XGL_ERROR_TOO_MANY_MEMORY_REFERENCES);
STR(XGL_ERROR_NOT_MAPPABLE);
STR(XGL_ERROR_MEMORY_MAP_FAILED);
STR(XGL_ERROR_MEMORY_UNMAP_FAILED);
STR(XGL_ERROR_INCOMPATIBLE_DEVICE);
STR(XGL_ERROR_INCOMPATIBLE_DRIVER);
STR(XGL_ERROR_INCOMPLETE_COMMAND_BUFFER);
STR(XGL_ERROR_BUILDING_COMMAND_BUFFER);
STR(XGL_ERROR_MEMORY_NOT_BOUND);
STR(XGL_ERROR_INCOMPATIBLE_QUEUE);
STR(XGL_ERROR_NOT_SHAREABLE);
STR(VK_SUCCESS);
STR(VK_UNSUPPORTED);
STR(VK_NOT_READY);
STR(VK_TIMEOUT);
STR(VK_EVENT_SET);
STR(VK_EVENT_RESET);
STR(VK_ERROR_UNKNOWN);
STR(VK_ERROR_UNAVAILABLE);
STR(VK_ERROR_INITIALIZATION_FAILED);
STR(VK_ERROR_OUT_OF_MEMORY);
STR(VK_ERROR_OUT_OF_GPU_MEMORY);
STR(VK_ERROR_DEVICE_ALREADY_CREATED);
STR(VK_ERROR_DEVICE_LOST);
STR(VK_ERROR_INVALID_POINTER);
STR(VK_ERROR_INVALID_VALUE);
STR(VK_ERROR_INVALID_HANDLE);
STR(VK_ERROR_INVALID_ORDINAL);
STR(VK_ERROR_INVALID_MEMORY_SIZE);
STR(VK_ERROR_INVALID_EXTENSION);
STR(VK_ERROR_INVALID_FLAGS);
STR(VK_ERROR_INVALID_ALIGNMENT);
STR(VK_ERROR_INVALID_FORMAT);
STR(VK_ERROR_INVALID_IMAGE);
STR(VK_ERROR_INVALID_DESCRIPTOR_SET_DATA);
STR(VK_ERROR_INVALID_QUEUE_TYPE);
STR(VK_ERROR_INVALID_OBJECT_TYPE);
STR(VK_ERROR_UNSUPPORTED_SHADER_IL_VERSION);
STR(VK_ERROR_BAD_SHADER_CODE);
STR(VK_ERROR_BAD_PIPELINE_DATA);
STR(VK_ERROR_TOO_MANY_MEMORY_REFERENCES);
STR(VK_ERROR_NOT_MAPPABLE);
STR(VK_ERROR_MEMORY_MAP_FAILED);
STR(VK_ERROR_MEMORY_UNMAP_FAILED);
STR(VK_ERROR_INCOMPATIBLE_DEVICE);
STR(VK_ERROR_INCOMPATIBLE_DRIVER);
STR(VK_ERROR_INCOMPLETE_COMMAND_BUFFER);
STR(VK_ERROR_BUILDING_COMMAND_BUFFER);
STR(VK_ERROR_MEMORY_NOT_BOUND);
STR(VK_ERROR_INCOMPATIBLE_QUEUE);
STR(VK_ERROR_NOT_SHAREABLE);
#undef STR
default: return "UNKNOWN_RESULT";
}
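Usage note (my sketch, not part of this commit): any test body that receives a VK_RESULT can run it through ASSERT_VK_SUCCESS and, on failure, gtest prints the symbolic name from vk_result_string. The test name is made up; the vkCreateInstance call mirrors the image test earlier in this diff, though whether an otherwise all-NULL create info is accepted depends on the ICD.
```
// Usage sketch only -- not code from this commit.
TEST(ResultStringExample, CreateAndDestroyInstance)
{
    VK_INSTANCE_CREATE_INFO inst_info = {};   // zero-init: pNext, pAppInfo, etc. stay NULL
    inst_info.sType = VK_STRUCTURE_TYPE_INSTANCE_CREATE_INFO;

    VK_INSTANCE inst;
    VK_RESULT err = vkCreateInstance(&inst_info, &inst);
    ASSERT_VK_SUCCESS(err);   // failure message includes e.g. "VK_ERROR_INVALID_POINTER"

    vkDestroyInstance(inst);
}
```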

File diff suppressed because it is too large

View File

@ -1,417 +0,0 @@
/*
* XGL Tests
*
* Copyright (C) 2014 LunarG, Inc.
*
* Permission is hereby granted, free of charge, to any person obtaining a
* copy of this software and associated documentation files (the "Software"),
* to deal in the Software without restriction, including without limitation
* the rights to use, copy, modify, merge, publish, distribute, sublicense,
* and/or sell copies of the Software, and to permit persons to whom the
* Software is furnished to do so, subject to the following conditions:
*
* The above copyright notice and this permission notice shall be included
* in all copies or substantial portions of the Software.
*
* THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
* IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
* FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
* THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
* LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
* FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER
* DEALINGS IN THE SOFTWARE.
*
* Authors:
* Courtney Goeltzenleuchter <courtney@lunarg.com>
*/
#ifndef XGLRENDERFRAMEWORK_H
#define XGLRENDERFRAMEWORK_H
#include "xgltestframework.h"
class XglDevice : public xgl_testing::Device
{
public:
XglDevice(uint32_t id, XGL_PHYSICAL_GPU obj);
XGL_DEVICE device() { return obj(); }
void get_device_queue();
uint32_t id;
XGL_PHYSICAL_GPU_PROPERTIES props;
const XGL_PHYSICAL_GPU_QUEUE_PROPERTIES *queue_props;
XGL_QUEUE m_queue;
};
class XglMemoryRefManager
{
public:
void AddMemoryRefs(xgl_testing::Object &xglObject);
void AddMemoryRefs(vector<XGL_GPU_MEMORY> mem);
void EmitAddMemoryRefs(XGL_QUEUE queue);
void EmitRemoveMemoryRefs(XGL_QUEUE queue);
vector<XGL_GPU_MEMORY> mem_refs() const;
protected:
vector<XGL_GPU_MEMORY> mem_refs_;
};
class XglDepthStencilObj : public xgl_testing::Image
{
public:
XglDepthStencilObj();
void Init(XglDevice *device, int32_t width, int32_t height);
bool Initialized();
XGL_DEPTH_STENCIL_BIND_INFO* BindInfo();
protected:
XglDevice *m_device;
bool m_initialized;
xgl_testing::DepthStencilView m_depthStencilView;
XGL_FORMAT m_depth_stencil_fmt;
XGL_DEPTH_STENCIL_BIND_INFO m_depthStencilBindInfo;
};
class XglRenderFramework : public XglTestFramework
{
public:
XglRenderFramework();
~XglRenderFramework();
XGL_DEVICE device() {return m_device->device();}
XGL_PHYSICAL_GPU gpu() {return objs[0];}
XGL_RENDER_PASS renderPass() {return m_renderPass;}
XGL_FRAMEBUFFER framebuffer() {return m_framebuffer;}
void InitViewport(float width, float height);
void InitViewport();
void InitRenderTarget();
void InitRenderTarget(uint32_t targets);
void InitRenderTarget(XGL_DEPTH_STENCIL_BIND_INFO *dsBinding);
void InitRenderTarget(uint32_t targets, XGL_DEPTH_STENCIL_BIND_INFO *dsBinding);
void InitFramework();
void ShutdownFramework();
void InitState();
protected:
XGL_APPLICATION_INFO app_info;
XGL_INSTANCE inst;
XGL_PHYSICAL_GPU objs[XGL_MAX_PHYSICAL_GPUS];
uint32_t gpu_count;
XglDevice *m_device;
XGL_CMD_BUFFER m_cmdBuffer;
XGL_RENDER_PASS m_renderPass;
XGL_FRAMEBUFFER m_framebuffer;
XGL_DYNAMIC_RS_STATE_OBJECT m_stateRaster;
XGL_DYNAMIC_CB_STATE_OBJECT m_colorBlend;
XGL_DYNAMIC_VP_STATE_OBJECT m_stateViewport;
XGL_DYNAMIC_DS_STATE_OBJECT m_stateDepthStencil;
vector<XglImage*> m_renderTargets;
float m_width, m_height;
XGL_FORMAT m_render_target_fmt;
XGL_FORMAT m_depth_stencil_fmt;
XGL_COLOR_ATTACHMENT_BIND_INFO m_colorBindings[8];
XGL_CLEAR_COLOR m_clear_color;
float m_depth_clear_color;
uint32_t m_stencil_clear_color;
XglDepthStencilObj *m_depthStencil;
XglMemoryRefManager m_mem_ref_mgr;
/*
* SetUp and TearDown are called by the Google Test framework
* to initialize a test framework based on this class.
*/
virtual void SetUp() {
this->app_info.sType = XGL_STRUCTURE_TYPE_APPLICATION_INFO;
this->app_info.pNext = NULL;
this->app_info.pAppName = "base";
this->app_info.appVersion = 1;
this->app_info.pEngineName = "unittest";
this->app_info.engineVersion = 1;
this->app_info.apiVersion = XGL_API_VERSION;
InitFramework();
}
virtual void TearDown() {
ShutdownFramework();
}
};
class XglDescriptorSetObj;
class XglIndexBufferObj;
class XglConstantBufferObj;
class XglPipelineObj;
class XglDescriptorSetObj;
class XglCommandBufferObj : public xgl_testing::CmdBuffer
{
public:
XglCommandBufferObj(XglDevice *device);
XGL_CMD_BUFFER GetBufferHandle();
XGL_RESULT BeginCommandBuffer();
XGL_RESULT BeginCommandBuffer(XGL_CMD_BUFFER_BEGIN_INFO *pInfo);
XGL_RESULT BeginCommandBuffer(XGL_RENDER_PASS renderpass_obj, XGL_FRAMEBUFFER framebuffer_obj);
XGL_RESULT EndCommandBuffer();
void PipelineBarrier(XGL_PIPELINE_BARRIER *barrierPtr);
void AddRenderTarget(XglImage *renderTarget);
void AddDepthStencil();
void ClearAllBuffers(XGL_CLEAR_COLOR clear_color, float depth_clear_color, uint32_t stencil_clear_color, XglDepthStencilObj *depthStencilObj);
void PrepareAttachments();
void AddMemoryRefs(xgl_testing::Object &xglObject);
void AddMemoryRefs(uint32_t ref_count, const XGL_GPU_MEMORY *mem);
void AddMemoryRefs(vector<xgl_testing::Object *> images);
void BindPipeline(XglPipelineObj &pipeline);
void BindDescriptorSet(XglDescriptorSetObj &descriptorSet);
void BindVertexBuffer(XglConstantBufferObj *vertexBuffer, uint32_t offset, uint32_t binding);
void BindIndexBuffer(XglIndexBufferObj *indexBuffer, uint32_t offset);
void BindStateObject(XGL_STATE_BIND_POINT stateBindPoint, XGL_DYNAMIC_STATE_OBJECT stateObject);
void BeginRenderPass(XGL_RENDER_PASS renderpass, XGL_FRAMEBUFFER framebuffer);
void EndRenderPass(XGL_RENDER_PASS renderpass);
void Draw(uint32_t firstVertex, uint32_t vertexCount, uint32_t firstInstance, uint32_t instanceCount);
void DrawIndexed(uint32_t firstIndex, uint32_t indexCount, int32_t vertexOffset, uint32_t firstInstance, uint32_t instanceCount);
void QueueCommandBuffer();
void QueueCommandBuffer(XGL_FENCE fence);
XglMemoryRefManager mem_ref_mgr;
protected:
XglDevice *m_device;
vector<XglImage*> m_renderTargets;
};
class XglConstantBufferObj : public xgl_testing::Buffer
{
public:
XglConstantBufferObj(XglDevice *device);
XglConstantBufferObj(XglDevice *device, int constantCount, int constantSize, const void* data);
~XglConstantBufferObj();
void BufferMemoryBarrier(
XGL_FLAGS outputMask =
XGL_MEMORY_OUTPUT_CPU_WRITE_BIT |
XGL_MEMORY_OUTPUT_SHADER_WRITE_BIT |
XGL_MEMORY_OUTPUT_COLOR_ATTACHMENT_BIT |
XGL_MEMORY_OUTPUT_DEPTH_STENCIL_ATTACHMENT_BIT |
XGL_MEMORY_OUTPUT_COPY_BIT,
XGL_FLAGS inputMask =
XGL_MEMORY_INPUT_CPU_READ_BIT |
XGL_MEMORY_INPUT_INDIRECT_COMMAND_BIT |
XGL_MEMORY_INPUT_INDEX_FETCH_BIT |
XGL_MEMORY_INPUT_VERTEX_ATTRIBUTE_FETCH_BIT |
XGL_MEMORY_INPUT_UNIFORM_READ_BIT |
XGL_MEMORY_INPUT_SHADER_READ_BIT |
XGL_MEMORY_INPUT_COLOR_ATTACHMENT_BIT |
XGL_MEMORY_INPUT_DEPTH_STENCIL_ATTACHMENT_BIT |
XGL_MEMORY_INPUT_COPY_BIT);
void Bind(XGL_CMD_BUFFER cmdBuffer, XGL_GPU_SIZE offset, uint32_t binding);
XGL_BUFFER_VIEW_ATTACH_INFO m_bufferViewInfo;
protected:
XglDevice *m_device;
xgl_testing::BufferView m_bufferView;
int m_numVertices;
int m_stride;
XglCommandBufferObj *m_commandBuffer;
xgl_testing::Fence m_fence;
};
class XglIndexBufferObj : public XglConstantBufferObj
{
public:
XglIndexBufferObj(XglDevice *device);
void CreateAndInitBuffer(int numIndexes, XGL_INDEX_TYPE dataFormat, const void* data);
void Bind(XGL_CMD_BUFFER cmdBuffer, XGL_GPU_SIZE offset);
XGL_INDEX_TYPE GetIndexType();
protected:
XGL_INDEX_TYPE m_indexType;
};
class XglImage : public xgl_testing::Image
{
public:
XglImage(XglDevice *dev);
bool IsCompatible(XGL_FLAGS usage, XGL_FLAGS features);
public:
void init(uint32_t w, uint32_t h,
XGL_FORMAT fmt, XGL_FLAGS usage,
XGL_IMAGE_TILING tiling=XGL_LINEAR_TILING);
// void clear( CommandBuffer*, uint32_t[4] );
void layout( XGL_IMAGE_LAYOUT layout )
{
m_imageInfo.layout = layout;
}
XGL_GPU_MEMORY memory() const
{
const std::vector<XGL_GPU_MEMORY> mems = memories();
return mems.empty() ? XGL_NULL_HANDLE : mems[0];
}
void ImageMemoryBarrier(XglCommandBufferObj *cmd,
XGL_IMAGE_ASPECT aspect,
XGL_FLAGS output_mask,
XGL_FLAGS input_mask,
XGL_IMAGE_LAYOUT image_layout);
XGL_RESULT CopyImage(XglImage &src_image);
XGL_IMAGE image() const
{
return obj();
}
XGL_COLOR_ATTACHMENT_VIEW targetView()
{
if (!m_targetView.initialized())
{
XGL_COLOR_ATTACHMENT_VIEW_CREATE_INFO createView = {
XGL_STRUCTURE_TYPE_COLOR_ATTACHMENT_VIEW_CREATE_INFO,
XGL_NULL_HANDLE,
obj(),
XGL_FMT_B8G8R8A8_UNORM,
0,
0,
1
};
m_targetView.init(*m_device, createView);
}
return m_targetView.obj();
}
void SetLayout(XglCommandBufferObj *cmd_buf, XGL_IMAGE_ASPECT aspect, XGL_IMAGE_LAYOUT image_layout);
void SetLayout(XGL_IMAGE_ASPECT aspect, XGL_IMAGE_LAYOUT image_layout);
XGL_IMAGE_LAYOUT layout() const
{
return ( XGL_IMAGE_LAYOUT )m_imageInfo.layout;
}
uint32_t width() const
{
return extent().width;
}
uint32_t height() const
{
return extent().height;
}
XglDevice* device() const
{
return m_device;
}
XGL_RESULT MapMemory(void** ptr);
XGL_RESULT UnmapMemory();
protected:
XglDevice *m_device;
xgl_testing::ColorAttachmentView m_targetView;
XGL_IMAGE_VIEW_ATTACH_INFO m_imageInfo;
};
class XglTextureObj : public XglImage
{
public:
XglTextureObj(XglDevice *device, uint32_t *colors = NULL);
XGL_IMAGE_VIEW_ATTACH_INFO m_textureViewInfo;
protected:
XglDevice *m_device;
xgl_testing::ImageView m_textureView;
XGL_GPU_SIZE m_rowPitch;
};
class XglSamplerObj : public xgl_testing::Sampler
{
public:
XglSamplerObj(XglDevice *device);
protected:
XglDevice *m_device;
};
class XglDescriptorSetObj : public xgl_testing::DescriptorPool
{
public:
XglDescriptorSetObj(XglDevice *device);
~XglDescriptorSetObj();
int AppendDummy();
int AppendBuffer(XGL_DESCRIPTOR_TYPE type, XglConstantBufferObj &constantBuffer);
int AppendSamplerTexture(XglSamplerObj* sampler, XglTextureObj* texture);
void CreateXGLDescriptorSet(XglCommandBufferObj *cmdBuffer);
XGL_DESCRIPTOR_SET GetDescriptorSetHandle() const;
XGL_DESCRIPTOR_SET_LAYOUT_CHAIN GetLayoutChain() const;
XglMemoryRefManager mem_ref_mgr;
protected:
XglDevice *m_device;
vector<XGL_DESCRIPTOR_TYPE_COUNT> m_type_counts;
int m_nextSlot;
vector<XGL_UPDATE_BUFFERS> m_updateBuffers;
vector<XGL_SAMPLER_IMAGE_VIEW_INFO> m_samplerTextureInfo;
vector<XGL_UPDATE_SAMPLER_TEXTURES> m_updateSamplerTextures;
xgl_testing::DescriptorSetLayout m_layout;
xgl_testing::DescriptorSetLayoutChain m_layout_chain;
xgl_testing::DescriptorSet *m_set;
};
class XglShaderObj : public xgl_testing::Shader
{
public:
XglShaderObj(XglDevice *device, const char * shaderText, XGL_PIPELINE_SHADER_STAGE stage, XglRenderFramework *framework);
XGL_PIPELINE_SHADER_STAGE_CREATE_INFO* GetStageCreateInfo();
protected:
XGL_PIPELINE_SHADER_STAGE_CREATE_INFO stage_info;
XGL_PIPELINE_SHADER_STAGE m_stage;
XglDevice *m_device;
};
class XglPipelineObj : public xgl_testing::Pipeline
{
public:
XglPipelineObj(XglDevice *device);
void AddShader(XglShaderObj* shaderObj);
void AddVertexInputAttribs(XGL_VERTEX_INPUT_ATTRIBUTE_DESCRIPTION* vi_attrib, int count);
void AddVertexInputBindings(XGL_VERTEX_INPUT_BINDING_DESCRIPTION* vi_binding, int count);
void AddVertexDataBuffer(XglConstantBufferObj* vertexDataBuffer, int binding);
void AddColorAttachment(uint32_t binding, const XGL_PIPELINE_CB_ATTACHMENT_STATE *att);
void SetDepthStencil(XGL_PIPELINE_DS_STATE_CREATE_INFO *);
void CreateXGLPipeline(XglDescriptorSetObj &descriptorSet);
protected:
XGL_PIPELINE_VERTEX_INPUT_CREATE_INFO m_vi_state;
XGL_PIPELINE_IA_STATE_CREATE_INFO m_ia_state;
XGL_PIPELINE_RS_STATE_CREATE_INFO m_rs_state;
XGL_PIPELINE_CB_STATE_CREATE_INFO m_cb_state;
XGL_PIPELINE_DS_STATE_CREATE_INFO m_ds_state;
XGL_PIPELINE_MS_STATE_CREATE_INFO m_ms_state;
XglDevice *m_device;
vector<XglShaderObj*> m_shaderObjs;
vector<XglConstantBufferObj*> m_vertexBufferObjs;
vector<int> m_vertexBufferBindings;
vector<XGL_PIPELINE_CB_ATTACHMENT_STATE> m_colorAttachments;
int m_vertexBufferCount;
};
#endif // XGLRENDERFRAMEWORK_H
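For orientation, a rough usage sketch of the framework declared above (this is the pre-rename header being removed by this commit, and the sketch is mine, not code from it): a fixture derived from XglRenderFramework gets InitFramework()/ShutdownFramework() via SetUp()/TearDown(), so a TEST_F body only initializes state and drives an XglCommandBufferObj. It assumes XglTestFramework ultimately derives from ::testing::Test; the fixture and test names are made up.
```
// Rough usage sketch only -- not part of this commit.
class XglExampleTest : public XglRenderFramework {};

TEST_F(XglExampleTest, ClearRenderTarget)
{
    InitState();
    InitViewport();
    InitRenderTarget();                              // fills m_renderTargets[0]

    XglCommandBufferObj cmdBuffer(m_device);
    cmdBuffer.AddRenderTarget(m_renderTargets[0]);

    ASSERT_EQ(XGL_SUCCESS, cmdBuffer.BeginCommandBuffer());
    cmdBuffer.ClearAllBuffers(m_clear_color, m_depth_clear_color,
                              m_stencil_clear_color, NULL);
    ASSERT_EQ(XGL_SUCCESS, cmdBuffer.EndCommandBuffer());
    cmdBuffer.QueueCommandBuffer();                  // submit to the device queue
}
```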

File diff suppressed because it is too large

View File

@ -1,891 +0,0 @@
// XGL tests
//
// Copyright (C) 2014 LunarG, Inc.
//
// Permission is hereby granted, free of charge, to any person obtaining a
// copy of this software and associated documentation files (the "Software"),
// to deal in the Software without restriction, including without limitation
// the rights to use, copy, modify, merge, publish, distribute, sublicense,
// and/or sell copies of the Software, and to permit persons to whom the
// Software is furnished to do so, subject to the following conditions:
//
// The above copyright notice and this permission notice shall be included
// in all copies or substantial portions of the Software.
//
// THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
// IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
// FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
// THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
// LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
// FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER
// DEALINGS IN THE SOFTWARE.
#ifndef XGLTESTBINDING_H
#define XGLTESTBINDING_H
#include <vector>
#include "xgl.h"
namespace xgl_testing {
typedef void (*ErrorCallback)(const char *expr, const char *file, unsigned int line, const char *function);
void set_error_callback(ErrorCallback callback);
class PhysicalGpu;
class BaseObject;
class Object;
class DynamicStateObject;
class Device;
class Queue;
class GpuMemory;
class Fence;
class Semaphore;
class Event;
class QueryPool;
class Buffer;
class BufferView;
class Image;
class ImageView;
class ColorAttachmentView;
class DepthStencilView;
class Shader;
class Pipeline;
class PipelineDelta;
class Sampler;
class DescriptorSetLayout;
class DescriptorSetLayoutChain;
class DescriptorSetPool;
class DescriptorSet;
class DynamicVpStateObject;
class DynamicRsStateObject;
class DynamicMsaaStateObject;
class DynamicCbStateObject;
class DynamicDsStateObject;
class CmdBuffer;
class PhysicalGpu {
public:
explicit PhysicalGpu(XGL_PHYSICAL_GPU gpu) : gpu_(gpu) {}
const XGL_PHYSICAL_GPU &obj() const { return gpu_; }
// xglGetGpuInfo()
XGL_PHYSICAL_GPU_PROPERTIES properties() const;
XGL_PHYSICAL_GPU_PERFORMANCE performance() const;
XGL_PHYSICAL_GPU_MEMORY_PROPERTIES memory_properties() const;
std::vector<XGL_PHYSICAL_GPU_QUEUE_PROPERTIES> queue_properties() const;
// xglGetProcAddr()
void *get_proc(const char *name) const { return xglGetProcAddr(gpu_, name); }
// xglGetExtensionSupport()
bool has_extension(const char *ext) const { return (xglGetExtensionSupport(gpu_, ext) == XGL_SUCCESS); }
std::vector<const char *> extensions() const;
// xglEnumerateLayers()
std::vector<const char *> layers(std::vector<char> &buf) const;
// xglGetMultiGpuCompatibility()
XGL_GPU_COMPATIBILITY_INFO compatibility(const PhysicalGpu &other) const;
private:
XGL_PHYSICAL_GPU gpu_;
};
class BaseObject {
public:
const XGL_BASE_OBJECT &obj() const { return obj_; }
bool initialized() const { return (obj_ != XGL_NULL_HANDLE); }
// xglGetObjectInfo()
uint32_t memory_allocation_count() const;
std::vector<XGL_MEMORY_REQUIREMENTS> memory_requirements() const;
protected:
explicit BaseObject() : obj_(XGL_NULL_HANDLE), own_obj_(false) {}
explicit BaseObject(XGL_BASE_OBJECT obj) : obj_(XGL_NULL_HANDLE), own_obj_(false) { init(obj); }
void init(XGL_BASE_OBJECT obj, bool own);
void init(XGL_BASE_OBJECT obj) { init(obj, true); }
void reinit(XGL_BASE_OBJECT obj, bool own);
void reinit(XGL_BASE_OBJECT obj) { reinit(obj, true); }
bool own() const { return own_obj_; }
private:
// base objects are non-copyable
BaseObject(const BaseObject &);
BaseObject &operator=(const BaseObject &);
XGL_BASE_OBJECT obj_;
bool own_obj_;
};
class Object : public BaseObject {
public:
const XGL_OBJECT &obj() const { return reinterpret_cast<const XGL_OBJECT &>(BaseObject::obj()); }
// xglBindObjectMemory()
void bind_memory(uint32_t alloc_idx, const GpuMemory &mem, XGL_GPU_SIZE mem_offset);
void unbind_memory(uint32_t alloc_idx);
void unbind_memory();
// xglBindObjectMemoryRange()
void bind_memory(uint32_t alloc_idx, XGL_GPU_SIZE offset, XGL_GPU_SIZE size,
const GpuMemory &mem, XGL_GPU_SIZE mem_offset);
// Unless an object is initialized with init_no_mem(), memories are
// automatically allocated and bound. These methods can be used to get
// the memories (for xglQueueAddMemReference), or to map/unmap the primary memory.
std::vector<XGL_GPU_MEMORY> memories() const;
const void *map(XGL_FLAGS flags) const;
void *map(XGL_FLAGS flags);
const void *map() const { return map(0); }
void *map() { return map(0); }
void unmap() const;
protected:
explicit Object() : mem_alloc_count_(0), internal_mems_(NULL), primary_mem_(NULL) {}
explicit Object(XGL_OBJECT obj) : mem_alloc_count_(0), internal_mems_(NULL), primary_mem_(NULL) { init(obj); }
~Object() { cleanup(); }
void init(XGL_OBJECT obj, bool own);
void init(XGL_OBJECT obj) { init(obj, true); }
void reinit(XGL_OBJECT obj, bool own);
void reinit(XGL_OBJECT obj) { init(obj, true); }
// allocate and bind internal memories
void alloc_memory(const Device &dev, bool for_linear_img, bool for_img);
void alloc_memory(const Device &dev) { alloc_memory(dev, false, false); }
void alloc_memory(const std::vector<XGL_GPU_MEMORY> &mems);
private:
void cleanup();
uint32_t mem_alloc_count_;
GpuMemory *internal_mems_;
GpuMemory *primary_mem_;
};
class DynamicStateObject : public Object {
public:
const XGL_DYNAMIC_STATE_OBJECT &obj() const { return reinterpret_cast<const XGL_DYNAMIC_STATE_OBJECT &>(Object::obj()); }
protected:
explicit DynamicStateObject() {}
explicit DynamicStateObject(XGL_DYNAMIC_STATE_OBJECT obj) : Object(obj) {}
};
template<typename T, class C>
class DerivedObject : public C {
public:
const T &obj() const { return reinterpret_cast<const T &>(C::obj()); }
protected:
typedef T obj_type;
typedef C base_type;
explicit DerivedObject() {}
explicit DerivedObject(T obj) : C(obj) {}
};
class Device : public DerivedObject<XGL_DEVICE, BaseObject> {
public:
explicit Device(XGL_PHYSICAL_GPU gpu) : gpu_(gpu) {}
~Device();
// xglCreateDevice()
void init(const XGL_DEVICE_CREATE_INFO &info);
void init(bool enable_layers); // all queues, all extensions, etc
void init() { init(false); };
const PhysicalGpu &gpu() const { return gpu_; }
// xglGetDeviceQueue()
const std::vector<Queue *> &graphics_queues() { return queues_[GRAPHICS]; }
const std::vector<Queue *> &compute_queues() { return queues_[COMPUTE]; }
const std::vector<Queue *> &dma_queues() { return queues_[DMA]; }
uint32_t graphics_queue_node_index_;
struct Format {
XGL_FORMAT format;
XGL_IMAGE_TILING tiling;
XGL_FLAGS features;
};
// xglGetFormatInfo()
XGL_FORMAT_PROPERTIES format_properties(XGL_FORMAT format);
const std::vector<Format> &formats() const { return formats_; }
// xglDeviceWaitIdle()
void wait();
// xglWaitForFences()
XGL_RESULT wait(const std::vector<const Fence *> &fences, bool wait_all, uint64_t timeout);
XGL_RESULT wait(const Fence &fence) { return wait(std::vector<const Fence *>(1, &fence), true, (uint64_t) -1); }
// xglBeginDescriptorPoolUpdate()
// xglEndDescriptorPoolUpdate()
void begin_descriptor_pool_update(XGL_DESCRIPTOR_UPDATE_MODE mode);
void end_descriptor_pool_update(CmdBuffer &cmd);
private:
enum QueueIndex {
GRAPHICS,
COMPUTE,
DMA,
QUEUE_COUNT,
};
void init_queues();
void init_formats();
PhysicalGpu gpu_;
std::vector<Queue *> queues_[QUEUE_COUNT];
std::vector<Format> formats_;
};
class Queue : public DerivedObject<XGL_QUEUE, BaseObject> {
public:
explicit Queue(XGL_QUEUE queue) : DerivedObject(queue) {}
// xglQueueSubmit()
void submit(const std::vector<const CmdBuffer *> &cmds, Fence &fence);
void submit(const CmdBuffer &cmd, Fence &fence);
void submit(const CmdBuffer &cmd);
// xglQueueAddMemReference()
// xglQueueRemoveMemReference()
void add_mem_references(const std::vector<XGL_GPU_MEMORY> &mem_refs);
void remove_mem_references(const std::vector<XGL_GPU_MEMORY> &mem_refs);
// xglQueueWaitIdle()
void wait();
// xglQueueSignalSemaphore()
// xglQueueWaitSemaphore()
void signal_semaphore(Semaphore &sem);
void wait_semaphore(Semaphore &sem);
};
class GpuMemory : public DerivedObject<XGL_GPU_MEMORY, BaseObject> {
public:
~GpuMemory();
// xglAllocMemory()
void init(const Device &dev, const XGL_MEMORY_ALLOC_INFO &info);
// xglPinSystemMemory()
void init(const Device &dev, size_t size, const void *data);
// xglOpenSharedMemory()
void init(const Device &dev, const XGL_MEMORY_OPEN_INFO &info);
// xglOpenPeerMemory()
void init(const Device &dev, const XGL_PEER_MEMORY_OPEN_INFO &info);
void init(XGL_GPU_MEMORY mem) { BaseObject::init(mem, false); }
// xglSetMemoryPriority()
void set_priority(XGL_MEMORY_PRIORITY priority);
// xglMapMemory()
const void *map(XGL_FLAGS flags) const;
void *map(XGL_FLAGS flags);
const void *map() const { return map(0); }
void *map() { return map(0); }
// xglUnmapMemory()
void unmap() const;
static XGL_MEMORY_ALLOC_INFO alloc_info(const XGL_MEMORY_REQUIREMENTS &reqs,
const XGL_MEMORY_ALLOC_INFO *next_info);
};
class Fence : public DerivedObject<XGL_FENCE, Object> {
public:
// xglCreateFence()
void init(const Device &dev, const XGL_FENCE_CREATE_INFO &info);
// xglGetFenceStatus()
XGL_RESULT status() const { return xglGetFenceStatus(obj()); }
static XGL_FENCE_CREATE_INFO create_info(XGL_FENCE_CREATE_FLAGS flags);
static XGL_FENCE_CREATE_INFO create_info();
};
class Semaphore : public DerivedObject<XGL_SEMAPHORE, Object> {
public:
// xglCreateSemaphore()
void init(const Device &dev, const XGL_SEMAPHORE_CREATE_INFO &info);
// xglOpenSharedSemaphore()
void init(const Device &dev, const XGL_SEMAPHORE_OPEN_INFO &info);
static XGL_SEMAPHORE_CREATE_INFO create_info(uint32_t init_count, XGL_FLAGS flags);
};
class Event : public DerivedObject<XGL_EVENT, Object> {
public:
// xglCreateEvent()
void init(const Device &dev, const XGL_EVENT_CREATE_INFO &info);
// xglGetEventStatus()
// xglSetEvent()
// xglResetEvent()
XGL_RESULT status() const { return xglGetEventStatus(obj()); }
void set();
void reset();
static XGL_EVENT_CREATE_INFO create_info(XGL_FLAGS flags);
};
class QueryPool : public DerivedObject<XGL_QUERY_POOL, Object> {
public:
// xglCreateQueryPool()
void init(const Device &dev, const XGL_QUERY_POOL_CREATE_INFO &info);
// xglGetQueryPoolResults()
XGL_RESULT results(uint32_t start, uint32_t count, size_t size, void *data);
static XGL_QUERY_POOL_CREATE_INFO create_info(XGL_QUERY_TYPE type, uint32_t slot_count);
};
class Buffer : public DerivedObject<XGL_BUFFER, Object> {
public:
explicit Buffer() {}
explicit Buffer(const Device &dev, const XGL_BUFFER_CREATE_INFO &info) { init(dev, info); }
explicit Buffer(const Device &dev, XGL_GPU_SIZE size) { init(dev, size); }
// xglCreateBuffer()
void init(const Device &dev, const XGL_BUFFER_CREATE_INFO &info);
void init(const Device &dev, XGL_GPU_SIZE size) { init(dev, create_info(size, 0)); }
void init_no_mem(const Device &dev, const XGL_BUFFER_CREATE_INFO &info);
static XGL_BUFFER_CREATE_INFO create_info(XGL_GPU_SIZE size, XGL_FLAGS usage);
XGL_BUFFER_MEMORY_BARRIER buffer_memory_barrier(XGL_FLAGS output_mask, XGL_FLAGS input_mask,
XGL_GPU_SIZE offset, XGL_GPU_SIZE size) const
{
XGL_BUFFER_MEMORY_BARRIER barrier = {};
barrier.sType = XGL_STRUCTURE_TYPE_BUFFER_MEMORY_BARRIER;
barrier.buffer = obj();
barrier.outputMask = output_mask;
barrier.inputMask = input_mask;
barrier.offset = offset;
barrier.size = size;
return barrier;
}
private:
XGL_BUFFER_CREATE_INFO create_info_;
};
class BufferView : public DerivedObject<XGL_BUFFER_VIEW, Object> {
public:
// xglCreateBufferView()
void init(const Device &dev, const XGL_BUFFER_VIEW_CREATE_INFO &info);
};
class Image : public DerivedObject<XGL_IMAGE, Object> {
public:
explicit Image() : format_features_(0) {}
explicit Image(const Device &dev, const XGL_IMAGE_CREATE_INFO &info) : format_features_(0) { init(dev, info); }
// xglCreateImage()
void init(const Device &dev, const XGL_IMAGE_CREATE_INFO &info);
void init_no_mem(const Device &dev, const XGL_IMAGE_CREATE_INFO &info);
// xglOpenPeerImage()
void init(const Device &dev, const XGL_PEER_IMAGE_OPEN_INFO &info, const XGL_IMAGE_CREATE_INFO &original_info);
// xglBindImageMemoryRange()
void bind_memory(uint32_t alloc_idx, const XGL_IMAGE_MEMORY_BIND_INFO &info,
const GpuMemory &mem, XGL_GPU_SIZE mem_offset);
// xglGetImageSubresourceInfo()
XGL_SUBRESOURCE_LAYOUT subresource_layout(const XGL_IMAGE_SUBRESOURCE &subres) const;
bool transparent() const;
bool copyable() const { return (format_features_ & XGL_FORMAT_IMAGE_COPY_BIT); }
XGL_IMAGE_SUBRESOURCE_RANGE subresource_range(XGL_IMAGE_ASPECT aspect) const { return subresource_range(create_info_, aspect); }
XGL_EXTENT3D extent() const { return create_info_.extent; }
XGL_EXTENT3D extent(uint32_t mip_level) const { return extent(create_info_.extent, mip_level); }
XGL_FORMAT format() const {return create_info_.format;}
XGL_IMAGE_MEMORY_BARRIER image_memory_barrier(XGL_FLAGS output_mask, XGL_FLAGS input_mask,
XGL_IMAGE_LAYOUT old_layout,
XGL_IMAGE_LAYOUT new_layout,
const XGL_IMAGE_SUBRESOURCE_RANGE &range) const
{
XGL_IMAGE_MEMORY_BARRIER barrier = {};
barrier.sType = XGL_STRUCTURE_TYPE_IMAGE_MEMORY_BARRIER;
barrier.outputMask = output_mask;
barrier.inputMask = input_mask;
barrier.oldLayout = old_layout;
barrier.newLayout = new_layout;
barrier.image = obj();
barrier.subresourceRange = range;
return barrier;
}
static XGL_IMAGE_CREATE_INFO create_info();
static XGL_IMAGE_SUBRESOURCE subresource(XGL_IMAGE_ASPECT aspect, uint32_t mip_level, uint32_t array_slice);
static XGL_IMAGE_SUBRESOURCE subresource(const XGL_IMAGE_SUBRESOURCE_RANGE &range, uint32_t mip_level, uint32_t array_slice);
static XGL_IMAGE_SUBRESOURCE_RANGE subresource_range(XGL_IMAGE_ASPECT aspect, uint32_t base_mip_level, uint32_t mip_levels,
uint32_t base_array_slice, uint32_t array_size);
static XGL_IMAGE_SUBRESOURCE_RANGE subresource_range(const XGL_IMAGE_CREATE_INFO &info, XGL_IMAGE_ASPECT aspect);
static XGL_IMAGE_SUBRESOURCE_RANGE subresource_range(const XGL_IMAGE_SUBRESOURCE &subres);
static XGL_EXTENT2D extent(int32_t width, int32_t height);
static XGL_EXTENT2D extent(const XGL_EXTENT2D &extent, uint32_t mip_level);
static XGL_EXTENT2D extent(const XGL_EXTENT3D &extent);
static XGL_EXTENT3D extent(int32_t width, int32_t height, int32_t depth);
static XGL_EXTENT3D extent(const XGL_EXTENT3D &extent, uint32_t mip_level);
private:
void init_info(const Device &dev, const XGL_IMAGE_CREATE_INFO &info);
XGL_IMAGE_CREATE_INFO create_info_;
XGL_FLAGS format_features_;
};
class ImageView : public DerivedObject<XGL_IMAGE_VIEW, Object> {
public:
// xglCreateImageView()
void init(const Device &dev, const XGL_IMAGE_VIEW_CREATE_INFO &info);
};
class ColorAttachmentView : public DerivedObject<XGL_COLOR_ATTACHMENT_VIEW, Object> {
public:
// xglCreateColorAttachmentView()
void init(const Device &dev, const XGL_COLOR_ATTACHMENT_VIEW_CREATE_INFO &info);
};
class DepthStencilView : public DerivedObject<XGL_DEPTH_STENCIL_VIEW, Object> {
public:
// xglCreateDepthStencilView()
void init(const Device &dev, const XGL_DEPTH_STENCIL_VIEW_CREATE_INFO &info);
};
class Shader : public DerivedObject<XGL_SHADER, Object> {
public:
// xglCreateShader()
void init(const Device &dev, const XGL_SHADER_CREATE_INFO &info);
XGL_RESULT init_try(const Device &dev, const XGL_SHADER_CREATE_INFO &info);
static XGL_SHADER_CREATE_INFO create_info(size_t code_size, const void *code, XGL_FLAGS flags);
};
class Pipeline : public DerivedObject<XGL_PIPELINE, Object> {
public:
// xglCreateGraphicsPipeline()
void init(const Device &dev, const XGL_GRAPHICS_PIPELINE_CREATE_INFO &info);
// xglCreateGraphicsPipelineDerivative()
void init(const Device &dev, const XGL_GRAPHICS_PIPELINE_CREATE_INFO &info, const XGL_PIPELINE basePipeline);
// xglCreateComputePipeline()
void init(const Device &dev, const XGL_COMPUTE_PIPELINE_CREATE_INFO &info);
// xglLoadPipeline()
void init(const Device&dev, size_t size, const void *data);
// xglLoadPipelineDerivative()
void init(const Device&dev, size_t size, const void *data, XGL_PIPELINE basePipeline);
// xglStorePipeline()
size_t store(size_t size, void *data);
};
class Sampler : public DerivedObject<XGL_SAMPLER, Object> {
public:
// xglCreateSampler()
void init(const Device &dev, const XGL_SAMPLER_CREATE_INFO &info);
};
class DescriptorSetLayout : public DerivedObject<XGL_DESCRIPTOR_SET_LAYOUT, Object> {
public:
// xglCreateDescriptorSetLayout()
void init(const Device &dev, const XGL_DESCRIPTOR_SET_LAYOUT_CREATE_INFO &info);
};
class DescriptorSetLayoutChain : public DerivedObject<XGL_DESCRIPTOR_SET_LAYOUT_CHAIN, Object> {
public:
// xglCreateDescriptorSetLayoutChain()
void init(const Device &dev, const std::vector<const DescriptorSetLayout *> &layouts);
};
class DescriptorPool : public DerivedObject<XGL_DESCRIPTOR_POOL, Object> {
public:
// xglCreateDescriptorPool()
void init(const Device &dev, XGL_DESCRIPTOR_POOL_USAGE usage,
uint32_t max_sets, const XGL_DESCRIPTOR_POOL_CREATE_INFO &info);
// xglResetDescriptorPool()
void reset();
// xglAllocDescriptorSets()
std::vector<DescriptorSet *> alloc_sets(XGL_DESCRIPTOR_SET_USAGE usage, const std::vector<const DescriptorSetLayout *> &layouts);
std::vector<DescriptorSet *> alloc_sets(XGL_DESCRIPTOR_SET_USAGE usage, const DescriptorSetLayout &layout, uint32_t count);
DescriptorSet *alloc_sets(XGL_DESCRIPTOR_SET_USAGE usage, const DescriptorSetLayout &layout);
// xglClearDescriptorSets()
void clear_sets(const std::vector<DescriptorSet *> &sets);
void clear_sets(DescriptorSet &set) { clear_sets(std::vector<DescriptorSet *>(1, &set)); }
};
class DescriptorSet : public DerivedObject<XGL_DESCRIPTOR_SET, Object> {
public:
explicit DescriptorSet(XGL_DESCRIPTOR_SET set) : DerivedObject(set) {}
// xglUpdateDescriptors()
void update(const std::vector<const void *> &update_array);
static XGL_UPDATE_SAMPLERS update(uint32_t binding, uint32_t index, uint32_t count, const XGL_SAMPLER *samplers);
static XGL_UPDATE_SAMPLERS update(uint32_t binding, uint32_t index, const std::vector<XGL_SAMPLER> &samplers);
static XGL_UPDATE_SAMPLER_TEXTURES update(uint32_t binding, uint32_t index, uint32_t count, const XGL_SAMPLER_IMAGE_VIEW_INFO *textures);
static XGL_UPDATE_SAMPLER_TEXTURES update(uint32_t binding, uint32_t index, const std::vector<XGL_SAMPLER_IMAGE_VIEW_INFO> &textures);
static XGL_UPDATE_IMAGES update(XGL_DESCRIPTOR_TYPE type, uint32_t binding, uint32_t index, uint32_t count, const XGL_IMAGE_VIEW_ATTACH_INFO *views);
static XGL_UPDATE_IMAGES update(XGL_DESCRIPTOR_TYPE type, uint32_t binding, uint32_t index, const std::vector<XGL_IMAGE_VIEW_ATTACH_INFO> &views);
static XGL_UPDATE_BUFFERS update(XGL_DESCRIPTOR_TYPE type, uint32_t binding, uint32_t index, uint32_t count, const XGL_BUFFER_VIEW_ATTACH_INFO *views);
static XGL_UPDATE_BUFFERS update(XGL_DESCRIPTOR_TYPE type, uint32_t binding, uint32_t index, const std::vector<XGL_BUFFER_VIEW_ATTACH_INFO> &views);
static XGL_UPDATE_AS_COPY update(XGL_DESCRIPTOR_TYPE type, uint32_t binding, uint32_t index, uint32_t count, const DescriptorSet &set);
static XGL_BUFFER_VIEW_ATTACH_INFO attach_info(const BufferView &view);
static XGL_IMAGE_VIEW_ATTACH_INFO attach_info(const ImageView &view, XGL_IMAGE_LAYOUT layout);
};
class DynamicVpStateObject : public DerivedObject<XGL_DYNAMIC_VP_STATE_OBJECT, DynamicStateObject> {
public:
// xglCreateDynamicViewportState()
void init(const Device &dev, const XGL_DYNAMIC_VP_STATE_CREATE_INFO &info);
};
class DynamicRsStateObject : public DerivedObject<XGL_DYNAMIC_RS_STATE_OBJECT, DynamicStateObject> {
public:
// xglCreateDynamicRasterState()
void init(const Device &dev, const XGL_DYNAMIC_RS_STATE_CREATE_INFO &info);
};
class DynamicCbStateObject : public DerivedObject<XGL_DYNAMIC_CB_STATE_OBJECT, DynamicStateObject> {
public:
// xglCreateDynamicColorBlendState()
void init(const Device &dev, const XGL_DYNAMIC_CB_STATE_CREATE_INFO &info);
};
class DynamicDsStateObject : public DerivedObject<XGL_DYNAMIC_DS_STATE_OBJECT, DynamicStateObject> {
public:
// xglCreateDynamicDepthStencilState()
void init(const Device &dev, const XGL_DYNAMIC_DS_STATE_CREATE_INFO &info);
};
class CmdBuffer : public DerivedObject<XGL_CMD_BUFFER, Object> {
public:
explicit CmdBuffer() {}
explicit CmdBuffer(const Device &dev, const XGL_CMD_BUFFER_CREATE_INFO &info) { init(dev, info); }
// xglCreateCommandBuffer()
void init(const Device &dev, const XGL_CMD_BUFFER_CREATE_INFO &info);
// xglBeginCommandBuffer()
void begin(const XGL_CMD_BUFFER_BEGIN_INFO *info);
void begin(XGL_RENDER_PASS renderpass_obj, XGL_FRAMEBUFFER framebuffer_obj);
void begin();
// xglEndCommandBuffer()
// xglResetCommandBuffer()
void end();
void reset();
static XGL_CMD_BUFFER_CREATE_INFO create_info(uint32_t queueNodeIndex);
};
inline const void *Object::map(XGL_FLAGS flags) const
{
return (primary_mem_) ? primary_mem_->map(flags) : NULL;
}
inline void *Object::map(XGL_FLAGS flags)
{
return (primary_mem_) ? primary_mem_->map(flags) : NULL;
}
inline void Object::unmap() const
{
if (primary_mem_)
primary_mem_->unmap();
}
inline XGL_MEMORY_ALLOC_INFO GpuMemory::alloc_info(const XGL_MEMORY_REQUIREMENTS &reqs,
const XGL_MEMORY_ALLOC_INFO *next_info)
{
XGL_MEMORY_ALLOC_INFO info = {};
info.sType = XGL_STRUCTURE_TYPE_MEMORY_ALLOC_INFO;
if (next_info != NULL)
info.pNext = (void *) next_info;
info.allocationSize = reqs.size;
info.memProps = reqs.memProps;
info.memType = reqs.memType;
info.memPriority = XGL_MEMORY_PRIORITY_NORMAL;
return info;
}
inline XGL_BUFFER_CREATE_INFO Buffer::create_info(XGL_GPU_SIZE size, XGL_FLAGS usage)
{
XGL_BUFFER_CREATE_INFO info = {};
info.sType = XGL_STRUCTURE_TYPE_BUFFER_CREATE_INFO;
info.size = size;
info.usage = usage;
return info;
}
inline XGL_FENCE_CREATE_INFO Fence::create_info(XGL_FENCE_CREATE_FLAGS flags)
{
XGL_FENCE_CREATE_INFO info = {};
info.sType = XGL_STRUCTURE_TYPE_FENCE_CREATE_INFO;
info.flags = flags;
return info;
}
inline XGL_FENCE_CREATE_INFO Fence::create_info()
{
XGL_FENCE_CREATE_INFO info = {};
info.sType = XGL_STRUCTURE_TYPE_FENCE_CREATE_INFO;
return info;
}
inline XGL_SEMAPHORE_CREATE_INFO Semaphore::create_info(uint32_t init_count, XGL_FLAGS flags)
{
XGL_SEMAPHORE_CREATE_INFO info = {};
info.sType = XGL_STRUCTURE_TYPE_SEMAPHORE_CREATE_INFO;
info.initialCount = init_count;
info.flags = flags;
return info;
}
inline XGL_EVENT_CREATE_INFO Event::create_info(XGL_FLAGS flags)
{
XGL_EVENT_CREATE_INFO info = {};
info.sType = XGL_STRUCTURE_TYPE_EVENT_CREATE_INFO;
info.flags = flags;
return info;
}
inline XGL_QUERY_POOL_CREATE_INFO QueryPool::create_info(XGL_QUERY_TYPE type, uint32_t slot_count)
{
XGL_QUERY_POOL_CREATE_INFO info = {};
info.sType = XGL_STRUCTURE_TYPE_QUERY_POOL_CREATE_INFO;
info.queryType = type;
info.slots = slot_count;
return info;
}
inline XGL_IMAGE_CREATE_INFO Image::create_info()
{
XGL_IMAGE_CREATE_INFO info = {};
info.sType = XGL_STRUCTURE_TYPE_IMAGE_CREATE_INFO;
info.extent.width = 1;
info.extent.height = 1;
info.extent.depth = 1;
info.mipLevels = 1;
info.arraySize = 1;
info.samples = 1;
return info;
}
inline XGL_IMAGE_SUBRESOURCE Image::subresource(XGL_IMAGE_ASPECT aspect, uint32_t mip_level, uint32_t array_slice)
{
XGL_IMAGE_SUBRESOURCE subres = {};
subres.aspect = aspect;
subres.mipLevel = mip_level;
subres.arraySlice = array_slice;
return subres;
}
inline XGL_IMAGE_SUBRESOURCE Image::subresource(const XGL_IMAGE_SUBRESOURCE_RANGE &range, uint32_t mip_level, uint32_t array_slice)
{
return subresource(range.aspect, range.baseMipLevel + mip_level, range.baseArraySlice + array_slice);
}
inline XGL_IMAGE_SUBRESOURCE_RANGE Image::subresource_range(XGL_IMAGE_ASPECT aspect, uint32_t base_mip_level, uint32_t mip_levels,
uint32_t base_array_slice, uint32_t array_size)
{
XGL_IMAGE_SUBRESOURCE_RANGE range = {};
range.aspect = aspect;
range.baseMipLevel = base_mip_level;
range.mipLevels = mip_levels;
range.baseArraySlice = base_array_slice;
range.arraySize = array_size;
return range;
}
inline XGL_IMAGE_SUBRESOURCE_RANGE Image::subresource_range(const XGL_IMAGE_CREATE_INFO &info, XGL_IMAGE_ASPECT aspect)
{
return subresource_range(aspect, 0, info.mipLevels, 0, info.arraySize);
}
inline XGL_IMAGE_SUBRESOURCE_RANGE Image::subresource_range(const XGL_IMAGE_SUBRESOURCE &subres)
{
return subresource_range(subres.aspect, subres.mipLevel, 1, subres.arraySlice, 1);
}
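For example, a full-range description of an image can be built straight from its create info; here info comes from the Image::create_info() helper above:
```
XGL_IMAGE_CREATE_INFO info = xgl_testing::Image::create_info();
XGL_IMAGE_SUBRESOURCE_RANGE full =
    xgl_testing::Image::subresource_range(info, XGL_IMAGE_ASPECT_COLOR);
// full covers mip levels [0, info.mipLevels) and array slices [0, info.arraySize)
```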
inline XGL_EXTENT2D Image::extent(int32_t width, int32_t height)
{
XGL_EXTENT2D extent = {};
extent.width = width;
extent.height = height;
return extent;
}
inline XGL_EXTENT2D Image::extent(const XGL_EXTENT2D &extent, uint32_t mip_level)
{
const int32_t width = (extent.width >> mip_level) ? extent.width >> mip_level : 1;
const int32_t height = (extent.height >> mip_level) ? extent.height >> mip_level : 1;
return Image::extent(width, height);
}
inline XGL_EXTENT2D Image::extent(const XGL_EXTENT3D &extent)
{
return Image::extent(extent.width, extent.height);
}
inline XGL_EXTENT3D Image::extent(int32_t width, int32_t height, int32_t depth)
{
XGL_EXTENT3D extent = {};
extent.width = width;
extent.height = height;
extent.depth = depth;
return extent;
}
inline XGL_EXTENT3D Image::extent(const XGL_EXTENT3D &extent, uint32_t mip_level)
{
const int32_t width = (extent.width >> mip_level) ? extent.width >> mip_level : 1;
const int32_t height = (extent.height >> mip_level) ? extent.height >> mip_level : 1;
const int32_t depth = (extent.depth >> mip_level) ? extent.depth >> mip_level : 1;
return Image::extent(width, height, depth);
}
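The mip-level overloads halve each dimension per level with a right shift and clamp the result to 1, for example:
```
XGL_EXTENT3D base = xgl_testing::Image::extent(13, 7, 1);
XGL_EXTENT3D mip2 = xgl_testing::Image::extent(base, 2);
// mip2 == { 3, 1, 1 }: 13 >> 2 == 3, 7 >> 2 == 1, and 1 >> 2 clamps back to 1
```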
inline XGL_SHADER_CREATE_INFO Shader::create_info(size_t code_size, const void *code, XGL_FLAGS flags)
{
XGL_SHADER_CREATE_INFO info = {};
info.sType = XGL_STRUCTURE_TYPE_SHADER_CREATE_INFO;
info.codeSize = code_size;
info.pCode = code;
info.flags = flags;
return info;
}
inline XGL_BUFFER_VIEW_ATTACH_INFO DescriptorSet::attach_info(const BufferView &view)
{
XGL_BUFFER_VIEW_ATTACH_INFO info = {};
info.sType = XGL_STRUCTURE_TYPE_BUFFER_VIEW_ATTACH_INFO;
info.view = view.obj();
return info;
}
inline XGL_IMAGE_VIEW_ATTACH_INFO DescriptorSet::attach_info(const ImageView &view, XGL_IMAGE_LAYOUT layout)
{
XGL_IMAGE_VIEW_ATTACH_INFO info = {};
info.sType = XGL_STRUCTURE_TYPE_IMAGE_VIEW_ATTACH_INFO;
info.view = view.obj();
info.layout = layout;
return info;
}
inline XGL_UPDATE_SAMPLERS DescriptorSet::update(uint32_t binding, uint32_t index, uint32_t count, const XGL_SAMPLER *samplers)
{
XGL_UPDATE_SAMPLERS info = {};
info.sType = XGL_STRUCTURE_TYPE_UPDATE_SAMPLERS;
info.binding = binding;
info.arrayIndex = index;
info.count = count;
info.pSamplers = samplers;
return info;
}
inline XGL_UPDATE_SAMPLERS DescriptorSet::update(uint32_t binding, uint32_t index, const std::vector<XGL_SAMPLER> &samplers)
{
return update(binding, index, samplers.size(), &samplers[0]);
}
inline XGL_UPDATE_SAMPLER_TEXTURES DescriptorSet::update(uint32_t binding, uint32_t index, uint32_t count, const XGL_SAMPLER_IMAGE_VIEW_INFO *textures)
{
XGL_UPDATE_SAMPLER_TEXTURES info = {};
info.sType = XGL_STRUCTURE_TYPE_UPDATE_SAMPLER_TEXTURES;
info.binding = binding;
info.arrayIndex = index;
info.count = count;
info.pSamplerImageViews = textures;
return info;
}
inline XGL_UPDATE_SAMPLER_TEXTURES DescriptorSet::update(uint32_t binding, uint32_t index, const std::vector<XGL_SAMPLER_IMAGE_VIEW_INFO> &textures)
{
return update(binding, index, textures.size(), &textures[0]);
}
inline XGL_UPDATE_IMAGES DescriptorSet::update(XGL_DESCRIPTOR_TYPE type, uint32_t binding, uint32_t index, uint32_t count,
const XGL_IMAGE_VIEW_ATTACH_INFO *views)
{
XGL_UPDATE_IMAGES info = {};
info.sType = XGL_STRUCTURE_TYPE_UPDATE_IMAGES;
info.descriptorType = type;
info.binding = binding;
info.arrayIndex = index;
info.count = count;
info.pImageViews = views;
return info;
}
inline XGL_UPDATE_IMAGES DescriptorSet::update(XGL_DESCRIPTOR_TYPE type, uint32_t binding, uint32_t index,
const std::vector<XGL_IMAGE_VIEW_ATTACH_INFO> &views)
{
return update(type, binding, index, views.size(), &views[0]);
}
inline XGL_UPDATE_BUFFERS DescriptorSet::update(XGL_DESCRIPTOR_TYPE type, uint32_t binding, uint32_t index, uint32_t count,
const XGL_BUFFER_VIEW_ATTACH_INFO *views)
{
XGL_UPDATE_BUFFERS info = {};
info.sType = XGL_STRUCTURE_TYPE_UPDATE_BUFFERS;
info.descriptorType = type;
info.binding = binding;
info.arrayIndex = index;
info.count = count;
info.pBufferViews = views;
return info;
}
inline XGL_UPDATE_BUFFERS DescriptorSet::update(XGL_DESCRIPTOR_TYPE type, uint32_t binding, uint32_t index,
const std::vector<XGL_BUFFER_VIEW_ATTACH_INFO> &views)
{
return update(type, binding, index, views.size(), &views[0]);
}
inline XGL_UPDATE_AS_COPY DescriptorSet::update(XGL_DESCRIPTOR_TYPE type, uint32_t binding, uint32_t index, uint32_t count, const DescriptorSet &set)
{
XGL_UPDATE_AS_COPY info = {};
info.sType = XGL_STRUCTURE_TYPE_UPDATE_AS_COPY;
info.descriptorType = type;
info.binding = binding;
info.arrayElement = index;
info.count = count;
info.descriptorSet = set.obj();
return info;
}
inline XGL_CMD_BUFFER_CREATE_INFO CmdBuffer::create_info(uint32_t queueNodeIndex)
{
XGL_CMD_BUFFER_CREATE_INFO info = {};
info.sType = XGL_STRUCTURE_TYPE_CMD_BUFFER_CREATE_INFO;
info.queueNodeIndex = queueNodeIndex;
return info;
}
}; // namespace xgl_testing
#endif // XGLTESTBINDING_H


@ -1,4 +1,4 @@
// XGL tests
// VK tests
//
// Copyright (C) 2014 LunarG, Inc.
//
@ -20,8 +20,8 @@
// FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER
// DEALINGS IN THE SOFTWARE.
#include "xgltestframework.h"
#include "xglrenderframework.h"
#include "vktestframework.h"
#include "vkrenderframework.h"
#include "GL/freeglut_std.h"
//#include "ShaderLang.h"
#include "GlslangToSpv.h"
@ -83,7 +83,7 @@ void TestEnvironment::SetUp()
// Initialize GLSL to SPV compiler utility
glslang::InitializeProcess();
xgl_testing::set_error_callback(test_error_callback);
vk_testing::set_error_callback(test_error_callback);
}
void TestEnvironment::TearDown()
@ -180,26 +180,26 @@ void XglTestFramework::InitArgs(int *argc, char *argv[])
void XglTestFramework::WritePPM( const char *basename, XglImage *image )
{
string filename;
XGL_RESULT err;
VK_RESULT err;
int x, y;
XglImage displayImage(image->device());
displayImage.init(image->extent().width, image->extent().height, image->format(), 0, XGL_LINEAR_TILING);
displayImage.init(image->extent().width, image->extent().height, image->format(), 0, VK_LINEAR_TILING);
displayImage.CopyImage(*image);
filename.append(basename);
filename.append(".ppm");
const XGL_IMAGE_SUBRESOURCE sr = {
XGL_IMAGE_ASPECT_COLOR, 0, 0
const VK_IMAGE_SUBRESOURCE sr = {
VK_IMAGE_ASPECT_COLOR, 0, 0
};
XGL_SUBRESOURCE_LAYOUT sr_layout;
VK_SUBRESOURCE_LAYOUT sr_layout;
size_t data_size = sizeof(sr_layout);
err = xglGetImageSubresourceInfo( image->image(), &sr,
XGL_INFO_TYPE_SUBRESOURCE_LAYOUT,
err = vkGetImageSubresourceInfo( image->image(), &sr,
VK_INFO_TYPE_SUBRESOURCE_LAYOUT,
&data_size, &sr_layout);
ASSERT_XGL_SUCCESS( err );
ASSERT_VK_SUCCESS( err );
ASSERT_EQ(data_size, sizeof(sr_layout));
char *ptr;
@ -218,7 +218,7 @@ void XglTestFramework::WritePPM( const char *basename, XglImage *image )
const int *row = (const int *) ptr;
int swapped;
if (displayImage.format() == XGL_FMT_B8G8R8A8_UNORM)
if (displayImage.format() == VK_FMT_B8G8R8A8_UNORM)
{
for (x = 0; x < displayImage.width(); x++) {
swapped = (*row & 0xff00ff00) | (*row & 0x000000ff) << 16 | (*row & 0x00ff0000) >> 16;
@ -226,7 +226,7 @@ void XglTestFramework::WritePPM( const char *basename, XglImage *image )
row++;
}
}
else if (displayImage.format() == XGL_FMT_R8G8B8A8_UNORM)
else if (displayImage.format() == VK_FMT_R8G8B8A8_UNORM)
{
for (x = 0; x < displayImage.width(); x++) {
file.write((char *) row, 3);
@ -300,26 +300,26 @@ void XglTestFramework::Compare(const char *basename, XglImage *image )
void XglTestFramework::Show(const char *comment, XglImage *image)
{
XGL_RESULT err;
VK_RESULT err;
const XGL_IMAGE_SUBRESOURCE sr = {
XGL_IMAGE_ASPECT_COLOR, 0, 0
const VK_IMAGE_SUBRESOURCE sr = {
VK_IMAGE_ASPECT_COLOR, 0, 0
};
XGL_SUBRESOURCE_LAYOUT sr_layout;
VK_SUBRESOURCE_LAYOUT sr_layout;
size_t data_size = sizeof(sr_layout);
XglTestImageRecord record;
if (!m_show_images) return;
err = xglGetImageSubresourceInfo( image->image(), &sr, XGL_INFO_TYPE_SUBRESOURCE_LAYOUT,
err = vkGetImageSubresourceInfo( image->image(), &sr, VK_INFO_TYPE_SUBRESOURCE_LAYOUT,
&data_size, &sr_layout);
ASSERT_XGL_SUCCESS( err );
ASSERT_VK_SUCCESS( err );
ASSERT_EQ(data_size, sizeof(sr_layout));
char *ptr;
err = image->MapMemory( (void **) &ptr );
ASSERT_XGL_SUCCESS( err );
ASSERT_VK_SUCCESS( err );
ptr += sr_layout.offset;
@ -334,7 +334,7 @@ void XglTestFramework::Show(const char *comment, XglImage *image)
m_display_image = --m_images.end();
err = image->UnmapMemory();
ASSERT_XGL_SUCCESS( err );
ASSERT_VK_SUCCESS( err );
}
@ -380,12 +380,12 @@ void XglTestFramework::RecordImage(XglImage * image)
}
}
static xgl_testing::Environment *environment;
static vk_testing::Environment *environment;
TestFrameworkXglPresent::TestFrameworkXglPresent() :
m_device(environment->default_device()),
m_queue(*m_device.graphics_queues()[0]),
m_cmdbuf(m_device, xgl_testing::CmdBuffer::create_info(m_device.graphics_queue_node_index_))
m_cmdbuf(m_device, vk_testing::CmdBuffer::create_info(m_device.graphics_queue_node_index_))
{
m_quit = false;
m_pause = false;
@ -395,9 +395,9 @@ TestFrameworkXglPresent::TestFrameworkXglPresent() :
void TestFrameworkXglPresent::Display()
{
XGL_RESULT err;
VK_RESULT err;
XGL_WSI_X11_PRESENT_INFO present = {};
VK_WSI_X11_PRESENT_INFO present = {};
present.destWindow = m_window;
present.srcImage = m_display_image->m_presentableImage;
@ -410,7 +410,7 @@ void TestFrameworkXglPresent::Display()
m_display_image->m_title.size(),
m_display_image->m_title.c_str());
err = xglWsiX11QueuePresent(m_queue.obj(), &present, NULL);
err = vkWsiX11QueuePresent(m_queue.obj(), &present, NULL);
assert(!err);
m_queue.wait();
@ -485,55 +485,55 @@ void TestFrameworkXglPresent::Run()
void TestFrameworkXglPresent::CreatePresentableImages()
{
XGL_RESULT err;
VK_RESULT err;
m_display_image = m_images.begin();
for (int x=0; x < m_images.size(); x++)
{
XGL_WSI_X11_PRESENTABLE_IMAGE_CREATE_INFO presentable_image_info = {};
presentable_image_info.format = XGL_FMT_B8G8R8A8_UNORM;
presentable_image_info.usage = XGL_IMAGE_USAGE_COLOR_ATTACHMENT_BIT;
VK_WSI_X11_PRESENTABLE_IMAGE_CREATE_INFO presentable_image_info = {};
presentable_image_info.format = VK_FMT_B8G8R8A8_UNORM;
presentable_image_info.usage = VK_IMAGE_USAGE_COLOR_ATTACHMENT_BIT;
presentable_image_info.extent.width = m_display_image->m_width;
presentable_image_info.extent.height = m_display_image->m_height;
presentable_image_info.flags = 0;
void *dest_ptr;
err = xglWsiX11CreatePresentableImage(m_device.obj(), &presentable_image_info,
err = vkWsiX11CreatePresentableImage(m_device.obj(), &presentable_image_info,
&m_display_image->m_presentableImage, &m_display_image->m_presentableMemory);
assert(!err);
xgl_testing::Buffer buf;
buf.init(m_device, (XGL_GPU_SIZE) m_display_image->m_data_size);
vk_testing::Buffer buf;
buf.init(m_device, (VK_GPU_SIZE) m_display_image->m_data_size);
dest_ptr = buf.map();
memcpy(dest_ptr,m_display_image->m_data, m_display_image->m_data_size);
buf.unmap();
m_cmdbuf.begin();
XGL_BUFFER_IMAGE_COPY region = {};
VK_BUFFER_IMAGE_COPY region = {};
region.imageExtent.height = m_display_image->m_height;
region.imageExtent.width = m_display_image->m_width;
region.imageExtent.depth = 1;
xglCmdCopyBufferToImage(m_cmdbuf.obj(),
vkCmdCopyBufferToImage(m_cmdbuf.obj(),
buf.obj(),
m_display_image->m_presentableImage, XGL_IMAGE_LAYOUT_TRANSFER_DESTINATION_OPTIMAL,
m_display_image->m_presentableImage, VK_IMAGE_LAYOUT_TRANSFER_DESTINATION_OPTIMAL,
1, &region);
m_cmdbuf.end();
xglQueueAddMemReference(m_queue.obj(), m_display_image->m_presentableMemory);
xglQueueAddMemReference(m_queue.obj(), buf.memories()[0]);
vkQueueAddMemReference(m_queue.obj(), m_display_image->m_presentableMemory);
vkQueueAddMemReference(m_queue.obj(), buf.memories()[0]);
XGL_CMD_BUFFER cmdBufs[1];
VK_CMD_BUFFER cmdBufs[1];
cmdBufs[0] = m_cmdbuf.obj();
xglQueueSubmit(m_queue.obj(), 1, cmdBufs, NULL);
vkQueueSubmit(m_queue.obj(), 1, cmdBufs, NULL);
m_queue.wait();
xglQueueRemoveMemReference(m_queue.obj(), m_display_image->m_presentableMemory);
xglQueueRemoveMemReference(m_queue.obj(), buf.memories()[0]);
vkQueueRemoveMemReference(m_queue.obj(), m_display_image->m_presentableMemory);
vkQueueRemoveMemReference(m_queue.obj(), buf.memories()[0]);
if (m_display_image->m_width > m_width)
m_width = m_display_image->m_width;
@ -594,7 +594,7 @@ void TestFrameworkXglPresent::TearDown()
{
std::list<XglTestImageRecord>::const_iterator iterator;
for (iterator = m_images.begin(); iterator != m_images.end(); ++iterator) {
xglDestroyObject(iterator->m_presentableImage);
vkDestroyObject(iterator->m_presentableImage);
}
xcb_destroy_window(environment->m_connection, m_window);
}
@ -603,18 +603,18 @@ void XglTestFramework::Finish()
{
if (m_images.size() == 0) return;
environment = new xgl_testing::Environment();
environment = new vk_testing::Environment();
::testing::AddGlobalTestEnvironment(environment);
environment->X11SetUp();
{
TestFrameworkXglPresent xglPresent;
TestFrameworkXglPresent vkPresent;
xglPresent.InitPresentFramework(m_images);
xglPresent.CreatePresentableImages();
xglPresent.CreateMyWindow();
xglPresent.Run();
xglPresent.TearDown();
vkPresent.InitPresentFramework(m_images);
vkPresent.CreatePresentableImages();
vkPresent.CreateMyWindow();
vkPresent.Run();
vkPresent.TearDown();
}
environment->TearDown();
}
@ -1079,27 +1079,27 @@ EShLanguage XglTestFramework::FindLanguage(const std::string& name)
}
//
// Convert XGL shader type to compiler's
// Convert VK shader type to compiler's
//
EShLanguage XglTestFramework::FindLanguage(const XGL_PIPELINE_SHADER_STAGE shader_type)
EShLanguage XglTestFramework::FindLanguage(const VK_PIPELINE_SHADER_STAGE shader_type)
{
switch (shader_type) {
case XGL_SHADER_STAGE_VERTEX:
case VK_SHADER_STAGE_VERTEX:
return EShLangVertex;
case XGL_SHADER_STAGE_TESS_CONTROL:
case VK_SHADER_STAGE_TESS_CONTROL:
return EShLangTessControl;
case XGL_SHADER_STAGE_TESS_EVALUATION:
case VK_SHADER_STAGE_TESS_EVALUATION:
return EShLangTessEvaluation;
case XGL_SHADER_STAGE_GEOMETRY:
case VK_SHADER_STAGE_GEOMETRY:
return EShLangGeometry;
case XGL_SHADER_STAGE_FRAGMENT:
case VK_SHADER_STAGE_FRAGMENT:
return EShLangFragment;
case XGL_SHADER_STAGE_COMPUTE:
case VK_SHADER_STAGE_COMPUTE:
return EShLangCompute;
default:
@ -1109,10 +1109,10 @@ EShLanguage XglTestFramework::FindLanguage(const XGL_PIPELINE_SHADER_STAGE shade
//
// Compile a given string containing GLSL into SPV for use by XGL
// Compile a given string containing GLSL into SPV for use by VK
// Return value of false means an error was encountered.
//
bool XglTestFramework::GLSLtoSPV(const XGL_PIPELINE_SHADER_STAGE shader_type,
bool XglTestFramework::GLSLtoSPV(const VK_PIPELINE_SHADER_STAGE shader_type,
const char *pshader,
std::vector<unsigned int> &spv)
{
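A hedged sketch of how a test calls this entry point; the shader source here is illustrative, and the VK-prefixed enum follows the renamed declaration above:
```
// Inside an XglTestFramework-derived test.
static const char *vertShaderText =
    "#version 140\n"
    "void main() { gl_Position = vec4(0.0); }\n";

std::vector<unsigned int> spv;
if (!GLSLtoSPV(VK_SHADER_STAGE_VERTEX, vertShaderText, spv))
    FAIL() << "GLSL-to-SPV compilation failed";
// spv then holds the SPIR-V words handed to Shader::create_info().
```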


@ -1,161 +0,0 @@
// XGL tests
//
// Copyright (C) 2014 LunarG, Inc.
//
// Permission is hereby granted, free of charge, to any person obtaining a
// copy of this software and associated documentation files (the "Software"),
// to deal in the Software without restriction, including without limitation
// the rights to use, copy, modify, merge, publish, distribute, sublicense,
// and/or sell copies of the Software, and to permit persons to whom the
// Software is furnished to do so, subject to the following conditions:
//
// The above copyright notice and this permission notice shall be included
// in all copies or substantial portions of the Software.
//
// THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
// IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
// FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
// THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
// LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
// FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER
// DEALINGS IN THE SOFTWARE.
#ifndef XGLTESTFRAMEWORK_H
#define XGLTESTFRAMEWORK_H
#include "gtest-1.7.0/include/gtest/gtest.h"
#include "ShaderLang.h"
#include "GLSL450Lib.h"
#include "icd-spv.h"
#include "test_common.h"
#include "xgltestbinding.h"
#include "test_environment.h"
#include <stdlib.h>
#include <stdio.h>
#include <stdbool.h>
#include <string.h>
#include <iostream>
#include <fstream>
#include <list>
#include <xglWsiX11Ext.h>
// Can be used by tests to record additional details / description of test
#define TEST_DESCRIPTION(desc) RecordProperty("description", desc)
using namespace std;
class XglImage;
class XglTestImageRecord
{
public:
XglTestImageRecord();
XglTestImageRecord(const XglTestImageRecord &);
~XglTestImageRecord();
XglTestImageRecord &operator=(const XglTestImageRecord &rhs);
int operator==(const XglTestImageRecord &rhs) const;
int operator<(const XglTestImageRecord &rhs) const;
string m_title;
int m_width;
int m_height;
void *m_data;
XGL_IMAGE m_presentableImage;
XGL_GPU_MEMORY m_presentableMemory;
unsigned m_data_size;
};
class XglTestFramework : public ::testing::Test
{
public:
XglTestFramework();
~XglTestFramework();
static void InitArgs(int *argc, char *argv[]);
static void Finish();
void WritePPM( const char *basename, XglImage *image );
void Show(const char *comment, XglImage *image);
void Compare(const char *comment, XglImage *image);
void RecordImage(XglImage * image);
void RecordImages(vector<XglImage *> image);
bool GLSLtoSPV(const XGL_PIPELINE_SHADER_STAGE shader_type,
const char *pshader,
std::vector<unsigned int> &spv);
static bool m_use_spv;
char** ReadFileData(const char* fileName);
void FreeFileData(char** data);
private:
int m_compile_options;
int m_num_shader_strings;
TBuiltInResource Resources;
void SetMessageOptions(EShMessages& messages);
void ProcessConfigFile();
EShLanguage FindLanguage(const std::string& name);
EShLanguage FindLanguage(const XGL_PIPELINE_SHADER_STAGE shader_type);
std::string ConfigFile;
bool SetConfigFile(const std::string& name);
static bool m_show_images;
static bool m_save_images;
static bool m_compare_images;
static std::list<XglTestImageRecord> m_images;
static std::list<XglTestImageRecord>::iterator m_display_image;
static int m_display_image_idx;
static int m_width; // Window width
static int m_height; // Window height
int m_frameNum;
string m_testName;
};
class TestFrameworkXglPresent
{
public:
TestFrameworkXglPresent();
void Run();
void InitPresentFramework(std::list<XglTestImageRecord> &imagesIn);
void CreateMyWindow();
void CreatePresentableImages();
void TearDown();
protected:
xgl_testing::Device &m_device;
xgl_testing::Queue &m_queue;
xgl_testing::CmdBuffer m_cmdbuf;
private:
xcb_window_t m_window;
xcb_intern_atom_reply_t *m_atom_wm_delete_window;
std::list<XglTestImageRecord> m_images;
bool m_quit;
bool m_pause;
uint32_t m_width;
uint32_t m_height;
std::list<XglTestImageRecord>::iterator m_display_image;
void Display();
void HandleEvent(xcb_generic_event_t *event);
};
class TestEnvironment : public ::testing::Environment
{
public:
void SetUp();
void TearDown();
};
#endif // XGLTESTFRAMEWORK_H


@ -1,6 +1,6 @@
#!/usr/bin/env python3
#
# XGL
# VK
#
# Copyright (C) 2014 LunarG, Inc.
#
@ -27,7 +27,7 @@ import sys
# code_gen.py overview
# This script generates code based on input headers
# Initially it's intended to support Mantle and XGL headers and
# Initially it's intended to support Mantle and VK headers and
# generate wrappers functions that can be used to display
# structs in a human-readable txt format, as well as utility functions
# to print enum values as strings
@ -161,8 +161,8 @@ class HeaderFileParser:
self.typedef_fwd_dict[base_type] = targ_type.strip(';')
self.typedef_rev_dict[targ_type.strip(';')] = base_type
elif parse_enum:
#if 'XGL_MAX_ENUM' not in line and '{' not in line:
if True not in [ens in line for ens in ['{', 'XGL_MAX_ENUM', '_RANGE']]:
#if 'VK_MAX_ENUM' not in line and '{' not in line:
if True not in [ens in line for ens in ['{', 'VK_MAX_ENUM', '_RANGE']]:
self._add_enum(line, base_type, default_enum_val)
default_enum_val += 1
elif parse_struct:
@ -342,21 +342,25 @@ class StructWrapperGen:
self.struct_dict = in_struct_dict
self.include_headers = []
self.api = prefix
self.header_filename = os.path.join(out_dir, self.api+"_struct_wrappers.h")
self.class_filename = os.path.join(out_dir, self.api+"_struct_wrappers.cpp")
self.string_helper_filename = os.path.join(out_dir, self.api+"_struct_string_helper.h")
self.string_helper_no_addr_filename = os.path.join(out_dir, self.api+"_struct_string_helper_no_addr.h")
self.string_helper_cpp_filename = os.path.join(out_dir, self.api+"_struct_string_helper_cpp.h")
self.string_helper_no_addr_cpp_filename = os.path.join(out_dir, self.api+"_struct_string_helper_no_addr_cpp.h")
self.validate_helper_filename = os.path.join(out_dir, self.api+"_struct_validate_helper.h")
if prefix == "vulkan":
self.api_prefix = "vk"
else:
self.api_prefix = prefix
self.header_filename = os.path.join(out_dir, self.api_prefix+"_struct_wrappers.h")
self.class_filename = os.path.join(out_dir, self.api_prefix+"_struct_wrappers.cpp")
self.string_helper_filename = os.path.join(out_dir, self.api_prefix+"_struct_string_helper.h")
self.string_helper_no_addr_filename = os.path.join(out_dir, self.api_prefix+"_struct_string_helper_no_addr.h")
self.string_helper_cpp_filename = os.path.join(out_dir, self.api_prefix+"_struct_string_helper_cpp.h")
self.string_helper_no_addr_cpp_filename = os.path.join(out_dir, self.api_prefix+"_struct_string_helper_no_addr_cpp.h")
self.validate_helper_filename = os.path.join(out_dir, self.api_prefix+"_struct_validate_helper.h")
self.no_addr = False
self.hfg = CommonFileGen(self.header_filename)
self.cfg = CommonFileGen(self.class_filename)
self.shg = CommonFileGen(self.string_helper_filename)
self.shcppg = CommonFileGen(self.string_helper_cpp_filename)
self.vhg = CommonFileGen(self.validate_helper_filename)
self.size_helper_filename = os.path.join(out_dir, self.api+"_struct_size_helper.h")
self.size_helper_c_filename = os.path.join(out_dir, self.api+"_struct_size_helper.c")
self.size_helper_filename = os.path.join(out_dir, self.api_prefix+"_struct_size_helper.h")
self.size_helper_c_filename = os.path.join(out_dir, self.api_prefix+"_struct_size_helper.c")
self.size_helper_gen = CommonFileGen(self.size_helper_filename)
self.size_helper_c_gen = CommonFileGen(self.size_helper_c_filename)
#print(self.header_filename)
@ -470,12 +474,12 @@ class StructWrapperGen:
def _generateCppHeader(self):
header = []
header.append("//#includes, #defines, globals and such...\n")
header.append("#include <stdio.h>\n#include <%s>\n#include <%s_enum_string_helper.h>\n" % (os.path.basename(self.header_filename), self.api))
header.append("#include <stdio.h>\n#include <%s>\n#include <%s_enum_string_helper.h>\n" % (os.path.basename(self.header_filename), self.api_prefix))
return "".join(header)
def _generateClassDefinition(self):
class_def = []
if 'xgl' == self.api: # Mantle doesn't have pNext to worry about
if 'vk' == self.api: # Mantle doesn't have pNext to worry about
class_def.append(self._generateDynamicPrintFunctions())
for s in sorted(self.struct_dict):
class_def.append("\n// %s class definition" % self.get_class_name(s))
@ -498,7 +502,7 @@ class StructWrapperGen:
def _generateDynamicPrintFunctions(self):
dp_funcs = []
dp_funcs.append("\nvoid dynamic_display_full_txt(const void* pStruct, uint32_t indent)\n{\n // Cast to APP_INFO ptr initially just to pull sType off struct")
dp_funcs.append(" XGL_STRUCTURE_TYPE sType = ((XGL_APPLICATION_INFO*)pStruct)->sType;\n")
dp_funcs.append(" VK_STRUCTURE_TYPE sType = ((VK_APPLICATION_INFO*)pStruct)->sType;\n")
dp_funcs.append(" switch (sType)\n {")
for e in enum_type_dict:
class_num = 0
@ -519,7 +523,7 @@ class StructWrapperGen:
return "\n".join(dp_funcs)
def _get_func_name(self, struct, mid_str):
return "%s_%s_%s" % (self.api, mid_str, struct.lower().strip("_"))
return "%s_%s_%s" % (self.api_prefix, mid_str, struct.lower().strip("_"))
def _get_sh_func_name(self, struct):
return self._get_func_name(struct, 'print')
@ -695,7 +699,7 @@ class StructWrapperGen:
sh_funcs.append(" if (pStruct == NULL) {")
sh_funcs.append(" return NULL;")
sh_funcs.append(" }")
sh_funcs.append(" XGL_STRUCTURE_TYPE sType = ((XGL_APPLICATION_INFO*)pStruct)->sType;")
sh_funcs.append(" VK_STRUCTURE_TYPE sType = ((VK_APPLICATION_INFO*)pStruct)->sType;")
sh_funcs.append(' char indent[100];\n strcpy(indent, " ");\n strcat(indent, prefix);')
sh_funcs.append(" switch (sType)\n {")
for e in enum_type_dict:
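Pieced together from the appends above, the generated C helper comes out roughly like the sketch below (it assumes the includes emitted into the generated header, and the per-structure case shown in the comment is illustrative only):
```
char* dynamic_display(const void* pStruct, const char* prefix)
{
    if (pStruct == NULL) {
        return NULL;
    }
    VK_STRUCTURE_TYPE sType = ((VK_APPLICATION_INFO*)pStruct)->sType;
    char indent[100];
    strcpy(indent, "  ");   /* indent width per the generator */
    strcat(indent, prefix);
    switch (sType)
    {
        /* one case per VK_STRUCTURE_TYPE_* value, each returning the matching
           vk_print_* helper generated above, e.g. (illustrative):
           case VK_STRUCTURE_TYPE_APPLICATION_INFO:
               return vk_print_vk_application_info((VK_APPLICATION_INFO*)pStruct, indent); */
        default:
            return NULL;   /* assumption: unknown sType yields NULL */
    }
}
```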
@ -852,7 +856,7 @@ class StructWrapperGen:
sh_funcs.append(" if (pStruct == NULL) {\n")
sh_funcs.append(" return NULL;")
sh_funcs.append(" }\n")
sh_funcs.append(" XGL_STRUCTURE_TYPE sType = ((XGL_APPLICATION_INFO*)pStruct)->sType;")
sh_funcs.append(" VK_STRUCTURE_TYPE sType = ((VK_APPLICATION_INFO*)pStruct)->sType;")
sh_funcs.append(' string indent = " ";')
sh_funcs.append(' indent += prefix;')
sh_funcs.append(" switch (sType)\n {")
@ -967,9 +971,9 @@ class StructWrapperGen:
header = []
header.append("//#includes, #defines, globals and such...\n")
for f in self.include_headers:
if 'xgl_enum_string_helper' not in f:
if 'vk_enum_string_helper' not in f:
header.append("#include <%s>\n" % f)
header.append('#include "xgl_enum_string_helper.h"\n\n// Function Prototypes\n')
header.append('#include "vk_enum_string_helper.h"\n\n// Function Prototypes\n')
header.append("char* dynamic_display(const void* pStruct, const char* prefix);\n")
return "".join(header)
@ -977,9 +981,9 @@ class StructWrapperGen:
header = []
header.append("//#includes, #defines, globals and such...\n")
for f in self.include_headers:
if 'xgl_enum_string_helper' not in f:
if 'vk_enum_string_helper' not in f:
header.append("#include <%s>\n" % f)
header.append('#include "xgl_enum_string_helper.h"\n')
header.append('#include "vk_enum_string_helper.h"\n')
header.append('using namespace std;\n\n// Function Prototypes\n')
header.append("string dynamic_display(const void* pStruct, const string prefix);\n")
return "".join(header)
@ -993,7 +997,7 @@ class StructWrapperGen:
for s in sorted(self.struct_dict):
sh_funcs.append('uint32_t %s(const %s* pStruct)\n{' % (self._get_vh_func_name(s), typedef_fwd_dict[s]))
for m in sorted(self.struct_dict[s]):
# TODO : Need to handle arrays of enums like in XGL_RENDER_PASS_CREATE_INFO struct
# TODO : Need to handle arrays of enums like in VK_RENDER_PASS_CREATE_INFO struct
if is_type(self.struct_dict[s][m]['type'], 'enum') and not self.struct_dict[s][m]['ptr']:
sh_funcs.append(' if (!validate_%s(pStruct->%s))\n return 0;' % (self.struct_dict[s][m]['type'], self.struct_dict[s][m]['name']))
# TODO : Need a little refinement to this code to make sure type of struct matches expected input (ptr, const...)
@ -1010,9 +1014,9 @@ class StructWrapperGen:
header = []
header.append("//#includes, #defines, globals and such...\n")
for f in self.include_headers:
if 'xgl_enum_validate_helper' not in f:
if 'vk_enum_validate_helper' not in f:
header.append("#include <%s>\n" % f)
header.append('#include "xgl_enum_validate_helper.h"\n\n// Function Prototypes\n')
header.append('#include "vk_enum_validate_helper.h"\n\n// Function Prototypes\n')
#header.append("char* dynamic_display(const void* pStruct, const char* prefix);\n")
return "".join(header)
@ -1044,7 +1048,7 @@ class StructWrapperGen:
if not is_type(self.struct_dict[s][m]['type'], 'struct') and not 'char' in self.struct_dict[s][m]['type'].lower():
if 'ppMemBarriers' == self.struct_dict[s][m]['name']:
# TODO : For now be conservative and consider all memBarrier ptrs as largest possible struct
sh_funcs.append('%sstructSize += pStruct->%s*(sizeof(%s*) + sizeof(XGL_IMAGE_MEMORY_BARRIER));' % (indent, self.struct_dict[s][m]['array_size'], self.struct_dict[s][m]['type']))
sh_funcs.append('%sstructSize += pStruct->%s*(sizeof(%s*) + sizeof(VK_IMAGE_MEMORY_BARRIER));' % (indent, self.struct_dict[s][m]['array_size'], self.struct_dict[s][m]['type']))
else:
sh_funcs.append('%sstructSize += pStruct->%s*(sizeof(%s*) + sizeof(%s));' % (indent, self.struct_dict[s][m]['array_size'], self.struct_dict[s][m]['type'], self.struct_dict[s][m]['type']))
else: # This is an array of char* or array of struct ptrs
@ -1091,8 +1095,8 @@ class StructWrapperGen:
else:
sh_funcs.append('size_t get_dynamic_struct_size(const void* pStruct)\n{')
indent = ' '
sh_funcs.append('%s// Just use XGL_APPLICATION_INFO as struct until actual type is resolved' % (indent))
sh_funcs.append('%sXGL_APPLICATION_INFO* pNext = (XGL_APPLICATION_INFO*)pStruct;' % (indent))
sh_funcs.append('%s// Just use VK_APPLICATION_INFO as struct until actual type is resolved' % (indent))
sh_funcs.append('%sVK_APPLICATION_INFO* pNext = (VK_APPLICATION_INFO*)pStruct;' % (indent))
sh_funcs.append('%ssize_t structSize = 0;' % (indent))
if follow_chain:
sh_funcs.append('%swhile (pNext) {' % (indent))
@ -1118,7 +1122,7 @@ class StructWrapperGen:
indent = indent[:-4]
sh_funcs.append('%s}' % (indent))
if follow_chain:
sh_funcs.append('%spNext = (XGL_APPLICATION_INFO*)pNext->pNext;' % (indent))
sh_funcs.append('%spNext = (VK_APPLICATION_INFO*)pNext->pNext;' % (indent))
indent = indent[:-4]
sh_funcs.append('%s}' % (indent))
sh_funcs.append('%sreturn structSize;\n}' % indent)
@ -1282,7 +1286,11 @@ class GraphVizGen:
def __init__(self, struct_dict, prefix, out_dir):
self.struct_dict = struct_dict
self.api = prefix
self.out_file = os.path.join(out_dir, self.api+"_struct_graphviz_helper.h")
if prefix == "vulkan":
self.api_prefix = "vk"
else:
self.api_prefix = prefix
self.out_file = os.path.join(out_dir, self.api_prefix+"_struct_graphviz_helper.h")
self.gvg = CommonFileGen(self.out_file)
def generate(self):
@ -1299,14 +1307,14 @@ class GraphVizGen:
header = []
header.append("//#includes, #defines, globals and such...\n")
for f in self.include_headers:
if 'xgl_enum_string_helper' not in f:
if 'vk_enum_string_helper' not in f:
header.append("#include <%s>\n" % f)
#header.append('#include "xgl_enum_string_helper.h"\n\n// Function Prototypes\n')
#header.append('#include "vk_enum_string_helper.h"\n\n// Function Prototypes\n')
header.append("\nchar* dynamic_gv_display(const void* pStruct, const char* prefix);\n")
return "".join(header)
def _get_gv_func_name(self, struct):
return "%s_gv_print_%s" % (self.api, struct.lower().strip("_"))
return "%s_gv_print_%s" % (self.api_prefix, struct.lower().strip("_"))
# Return elements to create formatted string for given struct member
def _get_struct_gv_print_formatted(self, struct_member, pre_var_name="", postfix = "\\n", struct_var_name="pStruct", struct_ptr=True, print_array=False, port_label=""):
@ -1368,15 +1376,15 @@ class GraphVizGen:
def _generateBody(self):
gv_funcs = []
array_func_list = [] # structs for which we'll generate an array version of their print function
array_func_list.append('xgl_buffer_view_attach_info')
array_func_list.append('xgl_image_view_attach_info')
array_func_list.append('xgl_sampler_image_view_info')
array_func_list.append('xgl_descriptor_type_count')
array_func_list.append('vk_buffer_view_attach_info')
array_func_list.append('vk_image_view_attach_info')
array_func_list.append('vk_sampler_image_view_info')
array_func_list.append('vk_descriptor_type_count')
# For first pass, generate prototype
for s in sorted(self.struct_dict):
gv_funcs.append('char* %s(const %s* pStruct, const char* myNodeName);\n' % (self._get_gv_func_name(s), typedef_fwd_dict[s]))
if s.lower().strip("_") in array_func_list:
if s.lower().strip("_") in ['xgl_buffer_view_attach_info', 'xgl_image_view_attach_info']:
if s.lower().strip("_") in ['vk_buffer_view_attach_info', 'vk_image_view_attach_info']:
gv_funcs.append('char* %s_array(uint32_t count, const %s* const* pStruct, const char* myNodeName);\n' % (self._get_gv_func_name(s), typedef_fwd_dict[s]))
else:
gv_funcs.append('char* %s_array(uint32_t count, const %s* pStruct, const char* myNodeName);\n' % (self._get_gv_func_name(s), typedef_fwd_dict[s]))
@ -1461,7 +1469,7 @@ class GraphVizGen:
gv_funcs.append(" return str;\n}\n")
if s.lower().strip("_") in array_func_list:
ptr_array = False
if s.lower().strip("_") in ['xgl_buffer_view_attach_info', 'xgl_image_view_attach_info']:
if s.lower().strip("_") in ['vk_buffer_view_attach_info', 'vk_image_view_attach_info']:
ptr_array = True
gv_funcs.append('char* %s_array(uint32_t count, const %s* const* pStruct, const char* myNodeName)\n{\n char* str;\n char tmpStr[1024];\n' % (self._get_gv_func_name(s), typedef_fwd_dict[s]))
else:
@ -1495,7 +1503,7 @@ class GraphVizGen:
# Add function to dynamically print out unknown struct
gv_funcs.append("char* dynamic_gv_display(const void* pStruct, const char* nodeName)\n{\n")
gv_funcs.append(" // Cast to APP_INFO ptr initially just to pull sType off struct\n")
gv_funcs.append(" XGL_STRUCTURE_TYPE sType = ((XGL_APPLICATION_INFO*)pStruct)->sType;\n")
gv_funcs.append(" VK_STRUCTURE_TYPE sType = ((VK_APPLICATION_INFO*)pStruct)->sType;\n")
gv_funcs.append(" switch (sType)\n {\n")
for e in enum_type_dict:
if "_STRUCTURE_TYPE" in e:
@ -1503,13 +1511,13 @@ class GraphVizGen:
struct_name = v.replace("_STRUCTURE_TYPE", "")
print_func_name = self._get_gv_func_name(struct_name)
# TODO : Hand-coded fixes for some exceptions
#if 'XGL_PIPELINE_CB_STATE_CREATE_INFO' in struct_name:
# struct_name = 'XGL_PIPELINE_CB_STATE'
if 'XGL_SEMAPHORE_CREATE_INFO' in struct_name:
struct_name = 'XGL_SEMAPHORE_CREATE_INFO'
#if 'VK_PIPELINE_CB_STATE_CREATE_INFO' in struct_name:
# struct_name = 'VK_PIPELINE_CB_STATE'
if 'VK_SEMAPHORE_CREATE_INFO' in struct_name:
struct_name = 'VK_SEMAPHORE_CREATE_INFO'
print_func_name = self._get_gv_func_name(struct_name)
elif 'XGL_SEMAPHORE_OPEN_INFO' in struct_name:
struct_name = 'XGL_SEMAPHORE_OPEN_INFO'
elif 'VK_SEMAPHORE_OPEN_INFO' in struct_name:
struct_name = 'VK_SEMAPHORE_OPEN_INFO'
print_func_name = self._get_gv_func_name(struct_name)
gv_funcs.append(' case %s:\n' % (v))
gv_funcs.append(' return %s((%s*)pStruct, nodeName);\n' % (print_func_name, struct_name))
@ -1565,17 +1573,20 @@ def main(argv=None):
#print(enum_val_dict)
#print(typedef_dict)
#print(struct_dict)
prefix = os.path.basename(opts.input_file).strip(".h")
if prefix == "vulkan":
prefix = "vk"
if (opts.abs_out_dir is not None):
enum_sh_filename = os.path.join(opts.abs_out_dir, os.path.basename(opts.input_file).strip(".h")+"_enum_string_helper.h")
enum_sh_filename = os.path.join(opts.abs_out_dir, prefix+"_enum_string_helper.h")
else:
enum_sh_filename = os.path.join(os.getcwd(), opts.rel_out_dir, os.path.basename(opts.input_file).strip(".h")+"_enum_string_helper.h")
enum_sh_filename = os.path.join(os.getcwd(), opts.rel_out_dir, prefix+"_enum_string_helper.h")
enum_sh_filename = os.path.abspath(enum_sh_filename)
if not os.path.exists(os.path.dirname(enum_sh_filename)):
print("Creating output dir %s" % os.path.dirname(enum_sh_filename))
os.mkdir(os.path.dirname(enum_sh_filename))
if opts.gen_enum_string_helper:
print("Generating enum string helper to %s" % enum_sh_filename)
enum_vh_filename = os.path.join(os.path.dirname(enum_sh_filename), os.path.basename(opts.input_file).strip(".h")+"_enum_validate_helper.h")
enum_vh_filename = os.path.join(os.path.dirname(enum_sh_filename), prefix+"_enum_validate_helper.h")
print("Generating enum validate helper to %s" % enum_vh_filename)
eg = EnumCodeGen(enum_type_dict, enum_val_dict, typedef_fwd_dict, os.path.basename(opts.input_file), enum_sh_filename, enum_vh_filename)
eg.generateStringHelper()


@ -1,6 +1,6 @@
#!/usr/bin/env python3
#
# XGL
# VK
#
# Copyright (C) 2014 LunarG, Inc.
#
@ -30,8 +30,8 @@ import sys
import xgl
def generate_get_proc_addr_check(name):
return " if (!%s || %s[0] != 'x' || %s[1] != 'g' || %s[2] != 'l')\n" \
" return NULL;" % ((name,) * 4)
return " if (!%s || %s[0] != 'v' || %s[1] != 'k')\n" \
" return NULL;" % ((name,) * 3)
class Subcommand(object):
def __init__(self, argv):
@ -64,7 +64,7 @@ class Subcommand(object):
return """/* THIS FILE IS GENERATED. DO NOT EDIT. */
/*
* XGL
* Vulkan
*
* Copyright (C) 2014 LunarG, Inc.
*
@ -111,7 +111,7 @@ class LoaderEntrypointsSubcommand(Subcommand):
def _generate_object_setup(self, proto):
method = "loader_init_data"
cond = "res == XGL_SUCCESS"
cond = "res == VK_SUCCESS"
if "Get" in proto.name:
method = "loader_set_data"
@ -146,18 +146,17 @@ class LoaderEntrypointsSubcommand(Subcommand):
for proto in self.protos:
if not self._is_dispatchable(proto):
continue
func = []
obj_setup = self._generate_object_setup(proto)
func.append(qual + proto.c_func(prefix="xgl", attr="XGLAPI"))
func.append(qual + proto.c_func(prefix="vk", attr="VKAPI"))
func.append("{")
# declare local variables
func.append(" const XGL_LAYER_DISPATCH_TABLE *disp;")
func.append(" const VK_LAYER_DISPATCH_TABLE *disp;")
if proto.ret != 'void' and obj_setup:
func.append(" XGL_RESULT res;")
func.append(" VK_RESULT res;")
func.append("")
# active layers before dispatching CreateDevice
@ -169,7 +168,7 @@ class LoaderEntrypointsSubcommand(Subcommand):
# get dispatch table and unwrap GPUs
for param in proto.params:
stmt = ""
if param.ty == "XGL_PHYSICAL_GPU":
if param.ty == "VK_PHYSICAL_GPU":
stmt = "loader_unwrap_gpu(&%s);" % param.name
if param == proto.params[0]:
stmt = "disp = " + stmt
@ -217,8 +216,8 @@ class DispatchTableOpsSubcommand(Subcommand):
super().run()
def generate_header(self):
return "\n".join(["#include <xgl.h>",
"#include <xglLayer.h>",
return "\n".join(["#include <vulkan.h>",
"#include <vkLayer.h>",
"#include <string.h>",
"#include \"loader_platform.h\""])
@ -231,16 +230,16 @@ class DispatchTableOpsSubcommand(Subcommand):
stmts.append("table->%s = gpa; /* direct assignment */" %
proto.name)
else:
stmts.append("table->%s = (xgl%sType) gpa(gpu, \"xgl%s\");" %
stmts.append("table->%s = (vk%sType) gpa(gpu, \"vk%s\");" %
(proto.name, proto.name, proto.name))
stmts.append("#endif")
func = []
func.append("static inline void %s_initialize_dispatch_table(XGL_LAYER_DISPATCH_TABLE *table,"
func.append("static inline void %s_initialize_dispatch_table(VK_LAYER_DISPATCH_TABLE *table,"
% self.prefix)
func.append("%s xglGetProcAddrType gpa,"
func.append("%s vkGetProcAddrType gpa,"
% (" " * len(self.prefix)))
func.append("%s XGL_PHYSICAL_GPU gpu)"
func.append("%s VK_PHYSICAL_GPU gpu)"
% (" " * len(self.prefix)))
func.append("{")
func.append(" %s" % "\n ".join(stmts))
@ -259,14 +258,14 @@ class DispatchTableOpsSubcommand(Subcommand):
lookups.append("#endif")
func = []
func.append("static inline void *%s_lookup_dispatch_table(const XGL_LAYER_DISPATCH_TABLE *table,"
func.append("static inline void *%s_lookup_dispatch_table(const VK_LAYER_DISPATCH_TABLE *table,"
% self.prefix)
func.append("%s const char *name)"
% (" " * len(self.prefix)))
func.append("{")
func.append(generate_get_proc_addr_check("name"))
func.append("")
func.append(" name += 3;")
func.append(" name += 2;")
func.append(" %s" % "\n ".join(lookups))
func.append("")
func.append(" return NULL;")
@ -286,7 +285,7 @@ class IcdDummyEntrypointsSubcommand(Subcommand):
self.prefix = self.argv[0]
self.qual = "static"
else:
self.prefix = "xgl"
self.prefix = "vk"
self.qual = "ICD_EXPORT"
super().run()
@ -295,14 +294,14 @@ class IcdDummyEntrypointsSubcommand(Subcommand):
return "#include \"icd.h\""
def _generate_stub_decl(self, proto):
return proto.c_pretty_decl(self.prefix + proto.name, attr="XGLAPI")
return proto.c_pretty_decl(self.prefix + proto.name, attr="VKAPI")
def _generate_stubs(self):
stubs = []
for proto in self.protos:
decl = self._generate_stub_decl(proto)
if proto.ret != "void":
stmt = " return XGL_ERROR_UNKNOWN;\n"
stmt = " return VK_ERROR_UNKNOWN;\n"
else:
stmt = ""
@ -340,7 +339,7 @@ class IcdGetProcAddrSubcommand(IcdDummyEntrypointsSubcommand):
body.append("{")
body.append(generate_get_proc_addr_check(gpa_pname))
body.append("")
body.append(" %s += 3;" % gpa_pname)
body.append(" %s += 2;" % gpa_pname)
body.append(" %s" % "\n ".join(lookups))
body.append("")
body.append(" return NULL;")
@ -350,7 +349,7 @@ class IcdGetProcAddrSubcommand(IcdDummyEntrypointsSubcommand):
class LayerInterceptProcSubcommand(Subcommand):
def run(self):
self.prefix = "xgl"
self.prefix = "vk"
# we could get the list from argv if wanted
self.intercepted = [proto.name for proto in self.protos
@ -363,7 +362,7 @@ class LayerInterceptProcSubcommand(Subcommand):
super().run()
def generate_header(self):
return "\n".join(["#include <string.h>", "#include \"xglLayer.h\""])
return "\n".join(["#include <string.h>", "#include \"vkLayer.h\""])
def generate_body(self):
lookups = []
@ -385,7 +384,7 @@ class LayerInterceptProcSubcommand(Subcommand):
body.append("{")
body.append(generate_get_proc_addr_check("name"))
body.append("")
body.append(" name += 3;")
body.append(" name += 2;")
body.append(" %s" % "\n ".join(lookups))
body.append("")
body.append(" return NULL;")
@ -423,7 +422,7 @@ class WinDefFileSubcommand(Subcommand):
return """; THIS FILE IS GENERATED. DO NOT EDIT.
;;;; Begin Copyright Notice ;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;
; XGL
; Vulkan
;
; Copyright (C) 2015 LunarG, Inc.
;
@ -458,7 +457,7 @@ class WinDefFileSubcommand(Subcommand):
for proto in self.protos:
if self.exports and proto.name not in self.exports:
continue
body.append(" xgl" + proto.name)
body.append(" vk" + proto.name)
return "\n".join(body)

File diff suppressed because it is too large

992
xgl.py Normal file → Executable file

File diff suppressed because it is too large