Akeana AI Solutions:

Enabling our customers to lead in AI SoCs by offering the broadest range of AI performance compute and data movement IP

  • Greater than 10x performance increase on non-linear functions
  • 32 TOPS MatMul performance with > 95% utilization
  • 10 TOPS/W with ternary weights via a partner's compute-in-memory (CIM) bit processor
  • Softmax acceleration with non-linear instructions, wide vectors, and data movement engines
  • 4096-bit vector length for ultra-high-performance AI computation
  • Broadest range of AI datatypes, including BF16, TF32, FP8, MXFP, and MXINT
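To make the softmax and wide-vector claims above concrete, here is a minimal sketch (our own illustration, not Akeana's API): a numerically stable softmax streamed in 256-element chunks, mirroring how a 4096-bit vector register holds 256 BF16 lanes (4096 / 16), with `exp()` standing in for the accelerated non-linear instruction.

```python
import math

# 4096-bit vector register / 16-bit BF16 element = 256 lanes per register
LANES = 4096 // 16

def softmax(x):
    # Pass 1: global max, for numerical stability
    m = max(x)
    # Pass 2: exponentials, chunked as a vector engine would stream them;
    # exp() is the non-linear function a wide vector unit accelerates
    e = []
    for i in range(0, len(x), LANES):
        e.extend(math.exp(v - m) for v in x[i:i + LANES])
    s = sum(e)
    # Pass 3: normalize
    return [v / s for v in e]

probs = softmax([1.0, 2.0, 3.0])
print(probs)  # sums to 1.0; the largest input gets the largest probability
```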

General Purpose Compute array

For front-end software flexibility and execution of a range of applications, for both AI and HPC

AI workloads are increasingly mixing general purpose compute and AI computation, especially in host-less use cases

Akeana is unique in offering AI-accelerated, wide-interface, high-performance AI CPUs with the Akeana 5000-series (6- to 10-issue wide, out-of-order)

AI Accelerated Compute Array, targeted at optimal AI algorithm computation

Highly optimized heterogeneous AI data compute engines with a parallel data movement architecture, targeting the highest compute per watt

AI Control, Companion Core

Software front end, possibly running an OS. Handles pre- and post-processing and tensor conversion. Can also be used for AI data computation with AI acceleration, vector extensions, and SMT support
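A typical pre-processing job a companion core might handle is tensor layout conversion. The sketch below (a hypothetical example, not Akeana software) converts a tensor from NHWC layout, common in image pipelines, to NCHW, common for accelerator kernels:

```python
def nhwc_to_nchw(t):
    """t is a nested list of shape [N][H][W][C]; returns shape [N][C][H][W]."""
    n, h, w, c = len(t), len(t[0]), len(t[0][0]), len(t[0][0][0])
    return [[[[t[i][y][x][ch] for x in range(w)] for y in range(h)]
             for ch in range(c)] for i in range(n)]

# A 1x2x2x3 input: one 2x2 image with 3 channels interleaved per pixel
img = [[[[1, 2, 3], [4, 5, 6]],
        [[7, 8, 9], [10, 11, 12]]]]
out = nhwc_to_nchw(img)
print(out[0][0])  # channel-0 plane: [[1, 4], [7, 10]]
```

After conversion, each channel is a contiguous plane, which is what matrix and convolution engines usually expect to stream.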

AI Vector Engine

A computation beast: data is pushed to the core for optimal efficiency in load, execute, and store. Fully cycle-deterministic. Used for offloading vector-computation-intensive algorithms.
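The load-execute-store pattern can be sketched as follows (names and chunk size are ours, purely illustrative): an offloaded AXPY kernel that streams fixed-size chunks the way data would be pushed through a vector engine.

```python
CHUNK = 8  # stand-in for one vector register's worth of elements

def axpy_offload(a, x, y):
    """Compute a*x + y elementwise in chunked load -> execute -> store steps."""
    out = [0.0] * len(x)
    for i in range(0, len(x), CHUNK):
        vx = x[i:i + CHUNK]                        # load
        vy = y[i:i + CHUNK]                        # load
        vr = [a * p + q for p, q in zip(vx, vy)]   # execute
        out[i:i + len(vr)] = vr                    # store
    return out

res = axpy_offload(2.0, [1.0] * 10, [3.0] * 10)
print(res)  # ten elements, each 2.0 * 1.0 + 3.0 = 5.0
```

Because each chunk performs the same fixed sequence of loads, operations, and stores, the per-chunk timing is predictable, which is what makes this style of kernel cycle-deterministic on real hardware.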

AI Data Movement engine

Software-controlled, multi-channel data movement engine with the option to compute on, or modify, data as it moves

  • Parallel execution, data movement between Core and Matrix Engine
  • Banked Shared Memory
  • Ultra wide data interfaces
  • Up to 8 computation units connected to a single shared memory
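The features above can be modeled in a toy sketch (our own naming, not Akeana's API): a software-controlled mover that copies between banks of a shared memory and can apply a transform to data in flight.

```python
class DataMover:
    """Toy model of a data movement engine over banked shared memory."""

    def __init__(self, banks, bank_size):
        # banked shared memory: each bank is an independent buffer
        self.mem = [[0] * bank_size for _ in range(banks)]

    def move(self, src_bank, dst_bank, count, transform=None):
        """Copy `count` words between banks, optionally modifying in flight."""
        data = self.mem[src_bank][:count]
        if transform is not None:
            data = [transform(v) for v in data]  # compute on moving data
        self.mem[dst_bank][:count] = data

dm = DataMover(banks=4, bank_size=8)
dm.mem[0][:4] = [1, 2, 3, 4]
# Move with an in-flight scale-by-2 (e.g., a dequantization-style step)
dm.move(src_bank=0, dst_bank=1, count=4, transform=lambda v: v * 2)
print(dm.mem[1])  # [2, 4, 6, 8, 0, 0, 0, 0]
```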

Ready to get started with Akeana?
