Tensor Computing Processor BM1684

SOPHON BM1684 is the third-generation tensor processor launched by SOPHGO for deep learning, delivering roughly six times the performance of the previous generation.

Wide Application and Scenarios
Deep Learning Performance

The edge computing deep learning processor BM1684 can be used in artificial intelligence, machine vision, and high-performance computing environments.

AI Accelerator
INT8 & FP32 Precision Support
  • Supports INT8 and FP32 precision, greatly improving deep learning performance (see the quantization sketch after this list)
  • Integrates a high-performance ARM core, supporting secondary development
  • Integrates video and image decoding and encoding capabilities
  • Supports PCIe and Ethernet interfaces
  • Supports TensorFlow, Caffe, and other mainstream frameworks
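
To make the INT8/FP32 trade-off concrete, below is a minimal NumPy sketch of symmetric per-tensor INT8 quantization, the general technique behind INT8 inference. It is illustrative only and not SOPHGO's calibration code; the SOPHON toolchain performs calibration and quantization during model compilation.

    # Illustrative only: symmetric per-tensor INT8 quantization of FP32 weights.
    import numpy as np

    def quantize_int8(weights_fp32: np.ndarray):
        """Map FP32 values to INT8 using a single symmetric scale factor."""
        scale = np.abs(weights_fp32).max() / 127.0                    # full-range symmetric scale
        q = np.clip(np.round(weights_fp32 / scale), -127, 127).astype(np.int8)
        return q, scale

    def dequantize(q: np.ndarray, scale: float) -> np.ndarray:
        """Recover approximate FP32 values to measure the accuracy loss."""
        return q.astype(np.float32) * scale

    w = np.random.randn(1000).astype(np.float32)
    q, s = quantize_int8(w)
    err = np.abs(w - dequantize(q, s)).max()
    print(f"max abs quantization error: {err:.6f} (scale = {s:.6f})")

INT8 storage and arithmetic cut memory traffic and allow far higher throughput on the tensor units, at the cost of the small rounding error measured above, which is why the chip keeps FP32 available for accuracy-sensitive layers.
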
Convenient and Efficient
Easy-to-use

The SOPHON SDK one-stop toolkit provides a suite of software tools including the underlying driver environment, a compiler, and an inference deployment tool. It covers model optimization, efficient runtime support, and the other capabilities required for neural network inference, offering an easy-to-use, full-stack solution for developing and deploying deep learning applications. SOPHON SDK minimizes algorithm and software development cycles and cost, so users can quickly deploy deep learning algorithms on SOPHGO's deep learning hardware products and build intelligent applications.
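
As a rough picture of the deployment flow described above, the sketch below loads a model that has already been compiled to the SDK's offline format (.bmodel) and runs one inference through the Python interface shipped with SOPHON SDK (sophon.sail). The class and method names (Engine, get_graph_names, get_input_names, process), the IOMode flag, and the file name resnet50_int8.bmodel are assumptions that may differ between SDK versions; consult the SDK documentation for the exact API.

    # Hypothetical sketch: run a compiled .bmodel with the SOPHON SDK Python wrapper.
    # Names below are assumed and may differ between SDK versions.
    import numpy as np
    import sophon.sail as sail

    # Load a model previously compiled to .bmodel by the SDK's offline compiler
    # (file name is a placeholder), targeting device 0.
    engine = sail.Engine("resnet50_int8.bmodel", 0, sail.IOMode.SYSIO)

    graph = engine.get_graph_names()[0]
    input_name = engine.get_input_names(graph)[0]

    # Run inference on a dummy NCHW tensor; a real application would feed
    # preprocessed frames from the chip's integrated video decoder.
    dummy = np.zeros((1, 3, 224, 224), dtype=np.float32)
    outputs = engine.process(graph, {input_name: dummy})
    print({name: arr.shape for name, arr in outputs.items()})
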
