For companies selling their own processors, Codeplay provides all the performance and programmability solutions that the end customer requires.
ComputeAorta is Codeplay’s multi-target, multi-platform framework for rapidly delivering the OpenCL™, SPIR-V™, Vulkan™ and oneAPI standard programming models. It supports the Linux®, Windows® and Android™ operating systems across x86, ARM® and RISC-V® targets, and can easily be customized to your hardware.
DSPs: ComputeAorta includes a range of optimizations for DSPs. These let software developers quickly bring their GPU code to DSP architectures and gain the performance and power-efficiency benefits of optimized DSPs.
ComputeAorta delivers industry-standard programming models while enabling all the required higher-level frameworks and libraries.
Ensure software developers can use widely adopted open standards to accelerate their applications using your processor.
ComputeCpp is Codeplay's implementation of the SYCL programming model, an open standard from the Khronos® Group. It enables software developers to write standard C++ code using a familiar development environment, and supports a wide range of frameworks for BLAS, DNN, machine learning, and more.
Easy migration from CUDA to SYCL
SYCL is being used in high performance computing and advanced driver assistance systems as an easy route to migrate away from NVIDIA® GPUs and CUDA. Open up a large AI ecosystem with ComputeCpp, and make it easy for developers using CUDA to move their code to your processor.
| Feature | How We Support It |
|---------|-------------------|
| On-chip SRAM | ComputeAorta manages private/local/global/constant/scratchpad memory. |
| Multiple address spaces | ComputeCpp call-graph duplication; ComputeAorta address-space management. |
| SIMD instructions | VECZ converts GPU-style SPMD code into DSP-style SIMD intrinsics. |
| DMA | ComputeAorta supports various DMA styles internally; ComputeCpp supports DMA accessors and DMA scheduling. |
| Processor-specific extensions | ComputeAorta and ComputeCpp provide extensions for specific instruction types and can be extended per device. |
| Matrix-multiply or convolution units | OpenCL "built-in kernels" are supported in ComputeAorta for fixed-function accelerator units. |
| Device-specific graph compilers | TensorOpt provides SYCL integration with TensorFlow and graph compilers; ComputeAorta supports MLIR, SPIR-V, Glow and TVM. |
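The SIMD row above refers to whole-function vectorization: code written once per work-item is transformed so that several work-items execute per loop iteration, which maps onto a DSP's SIMD lanes. A minimal sketch in plain C++ of the effect of that transformation (illustrative only; a real vectorizer such as VECZ emits target-specific SIMD intrinsics rather than a scalar loop):

```cpp
// GPU-style SPMD kernel body: one logical invocation per work-item `i`.
void kernel_scalar(int i, const float* a, const float* b, float* out) {
    out[i] = a[i] * 2.0f + b[i];
}

// Vectorized form (illustrative): four work-items packed into one call,
// one per SIMD lane, which a DSP's vector unit can execute together.
void kernel_vectorized(int base, const float* a, const float* b, float* out) {
    for (int lane = 0; lane < 4; ++lane)   // one lane per packed work-item
        out[base + lane] = a[base + lane] * 2.0f + b[base + lane];
}
```

Both forms compute identical results; the vectorized form simply does the work of four invocations at once.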