Akeana’s system IP is a comprehensive suite of component IP blocks, including a Compute Coherence Block (CCB), both coherent and non-coherent interconnect, an IOMMU, and interrupt controllers. When combined with our CPUs, this system IP gives customers the ability to create complete, customized solutions.
Moreover, Akeana stands out as the only vendor that can offer such a wide range of cores and system IP, making it the ideal choice for customers who require flexibility and scalability in their designs. By leveraging Akeana’s system IP, our customers can achieve optimal performance, reliability, and versatility in their system designs.
- Compute Coherence Block (CCB)
- Input–Output Memory Management Unit (IOMMU)
- Non-Coherent Interconnect Fabric
- AkeanaMesh (Coherent Interconnect Fabric)
- Advanced Interrupt Architecture (AIA)
Compute Coherence Block (CCB)
The Compute Coherence Block (CCB) connects a cluster of up to 8 cores coherently, using a directory-based protocol, and includes a cache shared by the core cluster. This localized coherency domain can interface with Akeana’s non-coherent or coherent interconnect IP using the industry-standard AMBA AXI (non-coherent) or CHI (coherent) protocols, respectively. The CCB provides RAS support in the form of ECC/parity-protected caches with error reporting, along with asynchronous interfaces for DVFS.
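To make the idea of a directory-based protocol concrete, the following is a minimal, purely conceptual C sketch of a coherence directory entry for an 8-core cluster. The state encoding and the read-miss handler are illustrative assumptions, not Akeana’s implementation.

```c
#include <stdint.h>
#include <stdbool.h>

/* Conceptual directory entry: one per cache line tracked by the shared
 * cache, recording which of the up-to-8 cores hold a copy and whether a
 * single core holds it exclusively. Illustrative only. */
typedef struct {
    uint8_t sharers;    /* bit i set => core i has a copy   */
    bool    exclusive;  /* true => one owner may have dirty data */
    uint8_t owner;      /* valid only when exclusive is true */
} dir_entry_t;

/* On a read miss from `core`: if another core owns the line exclusively,
 * it is downgraded to shared (and forwards the data); otherwise the shared
 * cache or memory supplies it. The reader joins the sharer set. */
static void handle_read_miss(dir_entry_t *e, unsigned core)
{
    if (e->exclusive) {
        e->exclusive = false;   /* downgrade the previous owner */
    }
    e->sharers |= (uint8_t)(1u << core);
}
```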
Input–Output Memory Management Unit (IOMMU)
Non-Coherent Interconnect Fabric
An AMBA AXI-compatible interconnect fabric that connects multiple Akeana cores. This Akeana interconnect IP enables customers to quickly implement a non-coherent multi-core solution.
AkeanaMesh (Coherent Interconnect Fabric)
Advanced Interrupt Architecture (AIA)
Akeana provides all the IP needed for interrupt control and management: the RISC-V APLIC (Advanced Platform-Level Interrupt Controller), the RISC-V ACLINT (Advanced Core Local Interruptor), and the IMSIC (Incoming Message-Signaled Interrupt Controller), handling both wired interrupts and MSIs.
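As a rough illustration of the MSI path, the sketch below shows how a device or driver pends an interrupt by writing its identity to an IMSIC interrupt file, per the RISC-V AIA specification. The base address used here is a hypothetical placeholder; the actual mapping is platform- and configuration-specific.

```c
#include <stdint.h>

/* Hypothetical base address of a hart's IMSIC supervisor-level interrupt
 * file; the real address depends on the platform memory map. */
#define IMSIC_S_FILE_BASE  0x28000000UL

/* Per the RISC-V AIA spec, writing an interrupt identity to the
 * seteipnum_le register (offset 0x0 of the 4 KiB interrupt file,
 * little-endian) pends that interrupt at the target hart. A device
 * generates an MSI by performing exactly this kind of memory write. */
static inline void send_msi(uint32_t interrupt_id)
{
    volatile uint32_t *seteipnum_le = (volatile uint32_t *)IMSIC_S_FILE_BASE;
    *seteipnum_le = interrupt_id;   /* identity 0 is reserved and ignored */
}
```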
Foundation for Successful AI
The Akeana 1000 Series and 5000 Series have the option to support AI acceleration features for optimized Framework and Vector Computation.
- AI data type support for Integer, Floating Point, and Block Data Representations.
- AI instructions for improved Vector Computation on key algorithms, such as activation functions (see the vector sketch after this list).
- Systolic Array architecture for optimized computation and throughput for Weight and Activation data.
- Broadest range of data types supporting Integer, Floating Point, and Block Data Representations at different sizes.
- The Akeana 1000 Series and the Akeana Matrix Engine access the shared cache in parallel, with dedicated interfaces for optimum performance and data-flow paths for Framework and Matrix execution.
- AkeanaMesh allows scaling to large coherent multi-core systems, providing up to petaFLOPS-level performance.
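As a point of reference for the activation-function case above, the sketch below implements ReLU with standard RISC-V Vector (RVV 1.0) intrinsics, compiled with something like -march=rv64gcv. It shows only the baseline vector path; Akeana’s AI instruction extensions are not detailed here, so no vendor-specific instructions are assumed.

```c
#include <stddef.h>
#include <riscv_vector.h>

/* ReLU over a float buffer using RVV 1.0 intrinsics. The loop is
 * strip-mined: each iteration processes as many elements as the
 * hardware vector length allows. */
void relu_f32(float *x, size_t n)
{
    for (size_t i = 0; i < n; ) {
        size_t vl = __riscv_vsetvl_e32m8(n - i);            /* elements this pass */
        vfloat32m8_t v = __riscv_vle32_v_f32m8(&x[i], vl);  /* load               */
        v = __riscv_vfmax_vf_f32m8(v, 0.0f, vl);            /* max(x, 0)          */
        __riscv_vse32_v_f32m8(&x[i], v, vl);                /* store              */
        i += vl;
    }
}
```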
Akeana offers a full software stack, allowing customers to quickly run neural network models on their SoC implementations using Akeana IP:
- Akeana Neural Network software library with performance-optimized low-level functions.
- Connections to a range of open-source neural network compilers, supporting a broad range of NN models and frameworks.
1. Framework Computation
   - Essential for fundamental operation and synchronization
   - Multi-core management and data management under an operating system (Linux)
2. Vector Computation
   - Performance acceleration with vector data
   - Flexibility to cover a range of AI activation and pre- and post-processing functions
3. Matrix Computation
   - Over 50% of computation is matrix multiply
   - Offload to the Matrix Engine (a reference kernel follows this list)
   - Support for advanced data types, future-proofing designs
4. Interconnect
   - Optimum performance, with parallel execution on multiple computation units
   - Scale performance with multi-core systems
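For reference, a minimal plain-C GEMM kernel like the one below represents the matrix-multiply workload that dominates neural network inference; in an Akeana-based design, this is the loop nest a Matrix Engine offload would target, while the CPU handles framework control flow. The function name and signature are illustrative, not part of any Akeana API.

```c
/* Naive single-precision matrix multiply: C[M x N] = A[M x K] * B[K x N].
 * This is the reference computation that a Matrix Engine would accelerate. */
void matmul_f32(const float *A, const float *B, float *C,
                int M, int N, int K)
{
    for (int m = 0; m < M; m++) {
        for (int n = 0; n < N; n++) {
            float acc = 0.0f;
            for (int k = 0; k < K; k++) {
                acc += A[m * K + k] * B[k * N + n];
            }
            C[m * N + n] = acc;
        }
    }
}
```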