How To Use Dax In Your Project


Using Dax from your own project is easy, especially if your project already uses CMake. This page provides some basic documentation to get you up and running with Dax.

Before following these instructions, make sure Dax is already available somewhere on your system. If you want to download the source and configure it specifically for your system, follow the directions under Building the Dax Toolkit. Alternatively, you can get a packaged version of a Dax install and place it on your system.

Basic CMake setup

Let us say that we start with an empty directory in which we are going to create a project that relies on Dax. The first step is to write a CMakeLists.txt. Here is a simple one that defines the project, finds and configures Dax, and creates a target to build a program.

# Required (sort of) for all cmake files
cmake_minimum_required(VERSION 2.8)
 
# All CMake projects start with this command.
project(daxexample CXX)
 
# This command will find the Dax configuration. It will ask you for the
# location of a file named DaxConfig.cmake, which is either in the Dax build
# directory or in the include directory of an install.
find_package(Dax REQUIRED)
 
# This will load in all the necessary configuration for using Dax with OpenMP.
# Because we specified REQUIRED, CMake will error if the configuration could
# not be loaded.
DaxConfigureOpenMP(REQUIRED)
 
# Now just create an executable like normal.
add_executable(daxexample_openmp ex1.cxx)

I've commented the CMake configuration to make it easy to follow what each of these steps does. This particular configuration compiles Dax for OpenMP, but we will get to a CUDA example in a little bit.
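As the comments note, find_package(Dax) works by locating a file named DaxConfig.cmake. If CMake cannot find it on its own, you can point it at the right directory through the standard Dax_DIR cache variable, for example by passing -DDax_DIR=<path> on the cmake command line or by setting it before find_package as in the sketch below. The path shown is only a placeholder; substitute your actual Dax build or install location.

# Placeholder path: replace with the directory that actually contains
# DaxConfig.cmake (the Dax build directory or the include directory of an
# install). Setting Dax_DIR before find_package(Dax) tells CMake's
# config-mode search where to look.
set(Dax_DIR "/path/to/dax-build" CACHE PATH "Directory containing DaxConfig.cmake")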

To complete the example, here is ex1.cxx, a very simple program that creates a mesh and runs a parallel operation on it.

// Boost gives a bunch of warnings with nvcc if you don't specify how shared
// pointers should handle threads. Dax does not care (it is too careful about
// threading to cause hazards in shared pointers), but your code might. Thus,
// you should specify one when compiling with nvcc. If your code does not share
// shared pointers among threads, then you can just disable them as below.
// (BTW, if you forget to set this, Dax will give its own descriptive message
// with instructions on how to fix.)
#define BOOST_SP_DISABLE_THREADS
 
// You can specify which backend Dax will use by uncommenting one of these
// lines.  Or you can use none of these defines and let Dax pick the "best"
// backend available.
//#define DAX_DEVICE_ADAPTER DAX_DEVICE_ADAPTER_SERIAL
//#define DAX_DEVICE_ADAPTER DAX_DEVICE_ADAPTER_OPENMP
//#define DAX_DEVICE_ADAPTER DAX_DEVICE_ADAPTER_CUDA
 
#include <iostream>
#include <iomanip>
#include <vector>
using namespace std;
 
//Dax includes
#include <dax/cont/DeviceAdapter.h>
#include <dax/cont/Schedule.h>
#include <dax/cont/UniformGrid.h>
#include <dax/cont/worklet/Magnitude.h>
 
int main(int, char*[])
{
  //Define the number of points to place on each dimension of our grid.
  const dax::Id GridSize = 2;
 
  //Create a uniform grid upon which we will perform computations. Make it
  //2x2x2, with the origin at (0,0,0) and the farthest extent at (1,1,1).
  dax::cont::UniformGrid<> grid;
  grid.SetOrigin(dax::make_Vector3(0.0, 0.0, 0.0));
  grid.SetSpacing(dax::make_Vector3(1.0, 1.0, 1.0));
  grid.SetExtent(dax::make_Id3(0, 0, 0),
                 dax::make_Id3(GridSize-1, GridSize-1, GridSize-1));
 
  //Generate a scalar handle to hold the distances from the origin
  dax::cont::ArrayHandle<dax::Scalar> distancesFromOrigin;
 
  //Calculate the magnitude of the component.
  dax::cont::Scheduler<>().Invoke(dax::worklet::Magnitude(),
                                  grid.GetPointCoordinates(),
                                  distancesFromOrigin);
 
  // Get the distances computed and print them out next to the coordinates.
  const unsigned int numPoints = grid.GetNumberOfPoints();
  vector<double> Distances(numPoints);
  distancesFromOrigin.CopyInto(Distances.begin());
  for(unsigned int pointIndex = 0; pointIndex < numPoints; pointIndex++)
    {
    dax::Vector3 Coords = grid.ComputePointCoordinates(pointIndex);
    cout << "(" << setw(3) << fixed << setprecision(1) << Coords[0] << ", "
         << setw(3) << fixed << setprecision(1) << Coords[1] << ", "
         << setw(3) << fixed << setprecision(1) << Coords[2] << ")  "
         << setprecision(5) << setw(14) << Distances[pointIndex] << endl;
    }
 
  return 0;
}
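
The Magnitude worklet above simply computes the Euclidean distance of each point from the origin. If you want to sanity-check the Dax output on a small grid, a plain serial loop like the sketch below produces the same values. It assumes the points of the uniform grid are ordered with the x index varying fastest, which may differ from the ordering Dax uses, so compare values rather than line order.

// Serial sanity check: walk the same 2x2x2 grid (origin (0,0,0), spacing 1)
// and print each point's distance from the origin.
#include <cmath>
#include <iostream>

int main()
{
  const int GridSize = 2;
  for (int k = 0; k < GridSize; ++k)
    for (int j = 0; j < GridSize; ++j)
      for (int i = 0; i < GridSize; ++i)
      {
        double x = i, y = j, z = k;
        std::cout << "(" << x << ", " << y << ", " << z << ")  "
                  << std::sqrt(x*x + y*y + z*z) << std::endl;
      }
  return 0;
}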

Set up a CUDA build and other CMake options

Setting up a CUDA build is not much different from setting up an OpenMP build like the one shown above. The only real differences are calling DaxConfigureCuda instead of DaxConfigureOpenMP and using the special commands for compiling CUDA (e.g. cuda_add_executable in place of add_executable). The following CMakeLists.txt extends the previous one to add a CUDA compile and also demonstrates some other miscellaneous features.

# Required (sort of) for all cmake files
cmake_minimum_required(VERSION 2.8)
 
# All CMake projects start with this command.
project(daxexample CXX)
 
# This command will find the Dax configuration. It will ask you for the
# location of a file named DaxConfig.cmake, which is either in the Dax build
# directory or in the include directory of an install.
find_package(Dax REQUIRED)
 
# This will load in all the necessary configuration for using Dax with OpenMP.
# Because we specified REQUIRED, CMake will error if the configuration could
# not be loaded.
DaxConfigureOpenMP(REQUIRED)
 
# Now just create an executable like normal.
add_executable(daxexample_openmp ex1.cxx)
 
# This will load in all the necessary configuration for compiling with Cuda.
# This time we will allow the load to fail and check for an error.
DaxConfigureCuda()
 
# If the configuration is successful, then Dax_Cuda_FOUND is set to a true
# value. Likewise, all the DaxConfigure<Device> macros set a Dax_<Device>_FOUND
# variable on success.
if (Dax_Cuda_FOUND)
  # We can compile the exact same Dax code to use on CUDA. To demonstrate this,
  # let's copy ex1.cxx to ex1.cu (CMake only compiles .cu files with the
  # CUDA nvcc compiler).
  configure_file(ex1.cxx ${CMAKE_BINARY_DIR}/ex1.cu COPYONLY)
 
  # Now create an executable (with Cuda) per standard CMake convention.
  cuda_add_executable(daxexample_cuda ${CMAKE_BINARY_DIR}/ex1.cu)
endif (Dax_Cuda_FOUND)

In addition to showing how to build a CUDA program with Dax, this example also demonstrates how to detect when a particular Dax configuration is not available (for example, if CUDA could not be found on your computer) and alter the build accordingly. If we wanted to force the configuration to use CUDA or fail, we could simply have made the CUDA configuration REQUIRED:

DaxConfigureCuda(REQUIRED)
cuda_add_executable(daxexample_cuda ${CMAKE_BINARY_DIR}/ex1.cu)
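
The same DaxConfigure<Device> pattern applies to the other backends Dax provides. For example, a serial fallback target could be added with a sketch like the one below; the DaxConfigureSerial macro name is an assumption based on the naming convention above, so check the CMake files shipped with your version of Dax for the exact macros available.

# Assumed to follow the DaxConfigure<Device> convention described above;
# verify the macro name against your Dax installation.
DaxConfigureSerial(REQUIRED)
add_executable(daxexample_serial ex1.cxx)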

Using other build systems

We have made it as easy as possible to use Dax with CMake, but it is straightforward to use Dax with other build systems as well. Dax is really just a header library, so using Dax is as simple as pointing your compiler to the include directory where Dax is installed. (We do not recommend trying to build a project against the Dax source tree without CMake; that is more complicated.)

Of course, you will still have to find and configure Dax's dependent libraries (such as Boost, Thrust, OpenMP, and CUDA). That is up to your own build system.

Acknowledgements

Sandia National Laboratories is a multi-program laboratory managed and operated by Sandia Corporation, a wholly owned subsidiary of Lockheed Martin Corporation, for the U.S. Department of Energy's National Nuclear Security Administration under contract DE-AC04-94AL85000.


SAND 2012-7422P