I will assume that you have already downloaded and installed the appropriate CUDA driver, toolkit and SDK from Nvidia. You can get your hands on Nvidia's beta OpenCL at http://www.nvidia.com/object/cuda_opencl.html. Since we have been working with matrix multiplication in CUDA, let's do the same with OpenCL. We will put together a trivial example of multiplying two 3 x 3 matrices together using OpenCL, just like we did with C for CUDA.
The basics of what we need to do have not changed… we need to allocate memory for the two matrices that we are multiplying together… let's call them A and B. Then we need to copy the data into the memory that we have allocated. Then we might want to print A and B out. We also need to allocate memory for the results… let's call it matrix C. The next step is to perform the multiplication, and finally we print out the results and free up the memory we allocated. The main program starts out identically to our C for CUDA version.
Main program (Listing 1)
The code in Listing 1 above has nothing OpenCL specific about it yet. We have put together a randomInit function to generate random floats for our matrices. In section 1 we allocate memory for our matrices. In section 2 we use our randomInit function to generate test data. In section 3 we print out the data that we have initialized our matrices with. Next we allocate memory for the results. Sections 5 through 8 don’t actually have any code yet. We will fill these in as we go. At section 9 we print out our results and then finally we free up all of the resources we allocated. Pretty straightforward… there is nothing special about the code yet.
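Since the listing itself isn't reproduced here, a minimal sketch of the helper functions it describes may help. The function name randomInit comes from the text; printMatrix and the row-major layout are my assumptions about what the print sections look like:

```c
#include <stdio.h>
#include <stdlib.h>

/* Fill an array with pseudo-random floats in [0, 1]
   (used in section 2 to generate test data). */
void randomInit(float *data, int size)
{
    for (int i = 0; i < size; ++i)
        data[i] = rand() / (float)RAND_MAX;
}

/* Print a width x height matrix stored in row-major order
   (used in sections 3 and 9 to show inputs and results).
   printMatrix is a hypothetical name, not from the listing. */
void printMatrix(const float *data, int width, int height)
{
    for (int row = 0; row < height; ++row) {
        for (int col = 0; col < width; ++col)
            printf("%8.4f ", data[row * width + col]);
        printf("\n");
    }
}
```

For our 3 x 3 example, sections 1 and 4 are just three malloc calls of 9 * sizeof(float) each, and section 10 is the matching free calls.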
As I mentioned in my OpenCL Program Structure post, with OpenCL you must create an OpenCL context and associate devices, kernels, program objects, memory objects, and a command queue with that context. All of this initialization is done in host code using OpenCL APIs. The host code can then interact with the device by inserting commands onto the command queue. To launch the kernel you simply put a launch command on the command queue. To retrieve your results you put a memory copy command on the command queue requesting that the device memory containing your results be copied back to host memory. So let's start by adding the code for the OpenCL initialization.
Main program (Listing 2)
As we walk through the code, many of the functions we will be calling have optional parameters that we will not be covering in detail. You can reference http://www.khronos.org/registry/cl/specs/opencl-1.0.43.pdf for a description of all of the parameters.
The first new line is the include for oclUtils.h. This include file is Nvidia specific and contains the code for the shrCheckError( ) method as well as the includes for the standard OpenCL headers. Let's skip past the declarations of the OpenCL specific variables to the first OpenCL method we call, clCreateContextFromType( ). This function creates an OpenCL context from a device type that identifies the specific device. Since we are offloading to a GPU we specify CL_DEVICE_TYPE_GPU. This will return a cl_context that we will use for the rest of our initialization.
The function shrCheckError( ) is an Nvidia specific function that prints out information on the error and exits. We will be making calls to this function throughout our main program. We then call clGetContextInfo( ) to get a handle to the cl_device_id. We make two calls to clGetContextInfo( ): one to determine how many bytes we need for our cl_device_id (we can have multiple OpenCL capable devices on one host) and a second to initialize our cl_device_id variable. Once we have the id of our device we can create a cl_command_queue so that we can interact with the device. We do this by making a call to clCreateCommandQueue( ).
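The initialization just described might look roughly like the following sketch. It is a fragment, not a complete program: it requires an OpenCL runtime and the Nvidia SDK's shrCheckError( ), and the variable names (clGPUContext, cdDevices, clCommandQue) are my assumptions, not necessarily what the listing uses:

```c
cl_context clGPUContext;
cl_command_queue clCommandQue;
cl_int errcode;
size_t dataBytes;

/* Create a context covering the GPU devices on the platform. */
clGPUContext = clCreateContextFromType(0, CL_DEVICE_TYPE_GPU,
                                       NULL, NULL, &errcode);
shrCheckError(errcode, CL_SUCCESS);

/* First call: how many bytes of device-id data are there? */
errcode = clGetContextInfo(clGPUContext, CL_CONTEXT_DEVICES,
                           0, NULL, &dataBytes);
/* Second call: fetch the ids and keep the first device. */
cl_device_id *cdDevices = (cl_device_id *)malloc(dataBytes);
errcode |= clGetContextInfo(clGPUContext, CL_CONTEXT_DEVICES,
                            dataBytes, cdDevices, NULL);
shrCheckError(errcode, CL_SUCCESS);

/* The command queue is our channel for talking to the device. */
clCommandQue = clCreateCommandQueue(clGPUContext, cdDevices[0],
                                    0, &errcode);
shrCheckError(errcode, CL_SUCCESS);
```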
At this point we are almost done with our initialization. The only thing left is to initialize the device memory for our matrices and copy over our data. To do this we make calls to clCreateBuffer( ) which is used both to allocate the device memory and optionally initialize it with data from host memory. We use it for both.
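Sketched out, the buffer setup could look like this. The names (d_A, h_A, mem_size_A, and so on) are assumed, and CL_MEM_COPY_HOST_PTR is what makes clCreateBuffer( ) both allocate the device memory and copy the host data in one call:

```c
/* Device buffers for A and B: allocate and copy host data at once. */
cl_mem d_A = clCreateBuffer(clGPUContext,
                            CL_MEM_READ_ONLY | CL_MEM_COPY_HOST_PTR,
                            mem_size_A, h_A, &errcode);
cl_mem d_B = clCreateBuffer(clGPUContext,
                            CL_MEM_READ_ONLY | CL_MEM_COPY_HOST_PTR,
                            mem_size_B, h_B, &errcode);

/* Device buffer for the result C: allocate only, no host data yet. */
cl_mem d_C = clCreateBuffer(clGPUContext, CL_MEM_WRITE_ONLY,
                            mem_size_C, NULL, &errcode);
```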
Main program (Listing 3)
Now that we have completed the OpenCL initialization, Listing 3 above contains the code to load and build the kernel. The function oclLoadProgSource( ) is not part of OpenCL. It is an Nvidia specific function that simplifies the loading of your kernel source. The first parameter must contain the complete path to the source file for your kernel. The second parameter is prepended to what is read in (which can be useful for adding includes). The last parameter is the address at which to store the number of bytes read in.
We next call clCreateProgramWithSource( ), which creates a program object for a context and loads the source code specified by the text strings in the clMatrixMul array into the program object. The devices associated with the program object are the devices associated with the context. The first parameter is the context that we initialized, the second is the number of strings in the clMatrixMul array, the third is the array of source strings itself, the fourth points to the lengths of those strings, and the last is for the return of the error code.
Now that we have a valid program object we need to compile, or "build", it. This is done in the call to clBuildProgram( ). This function takes a valid program object and the number of devices to build for; the remaining parameters are optional. Once we have successfully built our program object we need to associate a kernel object with each function prefixed with __kernel in our kernel source file (a single source file can contain multiple __kernel functions).
To map a specific __kernel function in our kernel source file with a kernel object inside our program we call clCreateKernel( ) passing in the built program object and the name of the function. This will return a kernel object that is ready to launch.
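Putting those three steps together, the load-and-build code might look like the fragment below. It assumes the Nvidia SDK helpers and a kernel function named matrixMul; the "kernel.cl" path and the preamble string are placeholders:

```c
/* Read the kernel source from disk (Nvidia SDK helper). */
size_t kernelLength;
char *clMatrixMul = oclLoadProgSource("kernel.cl",
                                      "// preamble\n", &kernelLength);

/* Wrap the source text in a program object tied to our context. */
cl_program clProgram = clCreateProgramWithSource(clGPUContext,
                           1, (const char **)&clMatrixMul,
                           &kernelLength, &errcode);
shrCheckError(errcode, CL_SUCCESS);

/* Compile ("build") the program for the context's devices. */
errcode = clBuildProgram(clProgram, 0, NULL, NULL, NULL, NULL);
shrCheckError(errcode, CL_SUCCESS);

/* Bind a kernel object to the __kernel function named matrixMul. */
cl_kernel clKernel = clCreateKernel(clProgram, "matrixMul", &errcode);
shrCheckError(errcode, CL_SUCCESS);
```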
Main program (Listing 4)
In listing 4 above we add the code to launch the kernel and retrieve the results. Before we can launch the kernel we need to set the parameters for our matrixMul kernel. We haven't looked at the kernel yet, but it is almost identical to the C for CUDA version, which took 5 parameters. To set the parameters we call clSetKernelArg( ), passing in the kernel, the index of the parameter in the kernel function, the size of the parameter and a pointer to the value. We do this for each parameter we need to pass in. If you have ever done any X Windows programming this tedious manner of passing in parameters should look familiar to you. Oh, and let's not forget to release all of the OpenCL resources that we have allocated.
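One call per parameter, by position. The argument order here (C first, then A, B and the two widths wA and wB) is my assumption about the matrixMul signature, not something fixed by OpenCL:

```c
/* One clSetKernelArg call per kernel parameter, indexed from 0. */
errcode  = clSetKernelArg(clKernel, 0, sizeof(cl_mem), (void *)&d_C);
errcode |= clSetKernelArg(clKernel, 1, sizeof(cl_mem), (void *)&d_A);
errcode |= clSetKernelArg(clKernel, 2, sizeof(cl_mem), (void *)&d_B);
errcode |= clSetKernelArg(clKernel, 3, sizeof(int),    (void *)&wA);
errcode |= clSetKernelArg(clKernel, 4, sizeof(int),    (void *)&wB);
shrCheckError(errcode, CL_SUCCESS);
```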
Now that the parameters are set, all we need to do is queue up a command to launch the kernel on the command queue that we associated with the context earlier. We do that with a call to clEnqueueNDRangeKernel( ). With C for CUDA we had to set the size of our grid of thread blocks and the size of each thread block. For our first kernel we have 9 threads (3 x 3 matrices) in a thread block and 1 thread block in the grid. With OpenCL we need to do the same thing; the only difference is that the dimensions of the grid in OpenCL are expressed in terms of the total number of threads in the grid. That's what the two arrays localWorkSize and globalWorkSize are used for.
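For our 3 x 3 case the two conventions happen to produce the same numbers, since there is only one work-group; the distinction matters as soon as the grid grows. A sketch of the launch, again assuming an OpenCL runtime is available:

```c
/* 3 x 3 = 9 threads total; one work-group covers the whole grid.
   Note: globalWorkSize is the TOTAL thread count in each dimension,
   not a work-group (thread block) count as in CUDA. */
size_t localWorkSize[]  = {3, 3};
size_t globalWorkSize[] = {3, 3};

errcode = clEnqueueNDRangeKernel(clCommandQue, clKernel,
                                 2,      /* work dimensions */
                                 NULL,   /* global offset (must be NULL in 1.0) */
                                 globalWorkSize, localWorkSize,
                                 0, NULL, NULL);
shrCheckError(errcode, CL_SUCCESS);
```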
Once we have launched our kernel we enqueue a command to retrieve the results from device memory with a call to clEnqueueReadBuffer( ). This call will block until the kernel has finished. Speaking of the kernel... what is our kernel going to look like?
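The read-back and the cleanup mentioned earlier might look like this fragment. CL_TRUE is what makes the read blocking; the release calls mirror everything we created (names assumed as before):

```c
/* Blocking read: returns only after the kernel has finished and
   the contents of d_C have landed in host memory at h_C. */
errcode = clEnqueueReadBuffer(clCommandQue, d_C, CL_TRUE, 0,
                              mem_size_C, h_C, 0, NULL, NULL);
shrCheckError(errcode, CL_SUCCESS);

/* Release every OpenCL resource we allocated. */
free(clMatrixMul);
clReleaseKernel(clKernel);
clReleaseProgram(clProgram);
clReleaseMemObject(d_A);
clReleaseMemObject(d_B);
clReleaseMemObject(d_C);
clReleaseCommandQueue(clCommandQue);
clReleaseContext(clGPUContext);
```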
Kernel (Listing 1)
The OpenCL kernel is basically identical to the C for CUDA version of the kernel; the differences are really cosmetic. We change the __global__ keyword that CUDA uses to the __kernel keyword that OpenCL uses to denote that a function is to be executed on the device. The threadIdx CUDA reference is replaced with calls to OpenCL's get_local_id( ) function, and the global memory for our matrices that are passed into the kernel needs to be qualified as __global in OpenCL. That's it... otherwise everything else is the same as our CUDA kernel.
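With those substitutions the kernel would look something like the sketch below; the parameter order matches the clSetKernelArg calls I assumed above, and each of the 9 threads computes one element of C as a dot product:

```c
/* kernel.cl -- OpenCL version of the naive matrixMul kernel. */
__kernel void matrixMul(__global float *C,
                        __global float *A,
                        __global float *B,
                        int wA, int wB)
{
    /* get_local_id() replaces CUDA's threadIdx.x / threadIdx.y. */
    int tx = get_local_id(0);
    int ty = get_local_id(1);

    /* One row of A dotted with one column of B. */
    float value = 0.0f;
    for (int k = 0; k < wA; ++k)
        value += A[ty * wA + k] * B[k * wB + tx];

    C[ty * wB + tx] = value;
}
```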
Comparing the C for CUDA code to the OpenCL code, you can see why we did not use Nvidia's CUDA driver API to start with. There is a significant amount of very mundane code that you have to write with CUDA's driver API and with OpenCL. It made sense for us to go the simpler route with Nvidia's C for CUDA because using their CUDA driver API didn't really buy us anything. With OpenCL, on the other hand, we obtain portability by writing the extra mundane code, so it is worth it... at least to me it is.
With the code above you should be able to cut / copy / paste your way to a running binary (don't forget to replace "kernel.cl" in the call to oclLoadProgSource( ) with the complete path to your kernel). You will find that OpenCL does not run this algorithm any faster than CUDA does... performance still sucks! That's OK we will make it better in the next two examples.
On to Matrix Multiplication 2.