Pointer Arithmetic issue #21
Hello Rasmus. Thank you for filing this issue; I will have someone from the Intel Graphics Compiler team look into it shortly. One thing that might help expedite the process is if you could provide a production driver label in which your code works as expected.
Hello Paigeale. I am sorry, but it was a pain to get my locked-down work PC to even upgrade to the newest driver so I could reproduce the issue on my own machine, so I cannot downgrade again. I can say that the code works as expected when compiled for the Intel CPU in both 32-bit and 64-bit, and when compiled for 32-bit on the integrated graphics. The behaviour is also the same when run on Nvidia, so I don't think the code relies on some undefined behaviour that randomly changed.
Hello Rasmus, I was able to reproduce what you are seeing on our legacy compiler. This is not an issue in our newest compiler (intel-graphics-compiler), which is open-sourced here. At this point there is no plan to fix this issue, but I would be happy to work with you on coming up with the necessary workarounds (it seems you already have one here). Please feel free to contact me with any further questions (email in profile).
Closing issue as "Not to be Fixed", as it concerns a legacy driver that is no longer supported.
(Most of this is a copy from https://software.intel.com/en-us/comment/1926881)
I have just tracked down a bug in a opencl kernel i have written. The code had been working fine until one of the users got a graphics driver update (versione 20.19.15.4835).
The code had worked for about a year on a wide assortment of CPUs and integrated and dedicated GPUs, both when compiled for x64 and x86. The old code still works on the CPU when compiled for either x64 or x86, and on the integrated GPU when compiled for x86. But when run on integrated graphics with the newest driver in x64 mode, it fails.
I have been able to track it down to these lines of code:
```c
global float* inputCPtr = inputC + turbines * windDirIndex;
out[gid] = inputCPtr[inputA[gid]];
```
Seemingly at random, this line would return 0 instead of the content of xCoords. Changing the code to the following fixes the bug:
```c
global float* inputCPtr = inputC + turbines * windDirIndex;
out[gid] = inputC[turbines * windDirIndex + inputA[gid]];
```
A full, (quite) minimal example is below:
The vector "test" should contain all 1's, but on the HD 5500 with the newest driver, compiled in Visual Studio for 64-bit on Windows 8, it contains a mix of 1's and 0's, seemingly in blocks whose sizes are multiples of 8.
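A minimal kernel sketch of the pattern described, assuming inputA holds per-work-item indices, inputC holds the data (all 1.0f in the test), and turbines and windDirIndex are scalar kernel arguments, might look like the following; the kernel name and signature are illustrative, not taken from the original example:

```c
// Hypothetical reconstruction of the failing pattern, not the original kernel.
// Assumed roles: inputA holds per-work-item indices into the offset region of
// inputC; inputC holds the data (all 1.0f in the test described above).
kernel void repro(global const int* inputA,
                  global const float* inputC,
                  global float* out,
                  const int turbines,
                  const int windDirIndex)
{
    size_t gid = get_global_id(0);

    // Failing form on the affected driver: an offset base pointer computed
    // with pointer arithmetic, then indexed indirectly through inputA.
    global const float* inputCPtr = inputC + turbines * windDirIndex;
    out[gid] = inputCPtr[inputA[gid]];

    // Workaround from the thread: fold the offset into a single direct
    // index on inputC instead of reading through the derived pointer.
    // out[gid] = inputC[turbines * windDirIndex + inputA[gid]];
}
```

With inputC filled with 1.0f, out (the "test" vector) should come back as all 1's; on the affected driver in x64 mode, the pointer-arithmetic form reportedly returns 0's in blocks instead.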