"AI Chip Giant Hegemony Era: Google, NVIDIA, Intel, Huawei AI Chip Debut Next Week" By Zhang Shaohua

Today, Yu Chengdong, Huawei's senior vice president, posted a video on Weibo to build anticipation for Huawei's upcoming AI chip. Declaring that "the pursuit of speed never ends with imagination," he indicated that the chip will be unveiled on September 2 at IFA 2017. At Huawei's mid-year performance media communication meeting last month, Yu Chengdong revealed that the company would release AI chips this fall, making Huawei the first company to put an artificial intelligence processor in a smartphone. At the 2017 China Internet Conference, he added that chips from Huawei's HiSilicon division will integrate CPU, GPU, and AI functions, potentially based on the new AI chip design ARM introduced at Computex this year.

According to today's video, Huawei's AI processor is expected to significantly boost the data processing speed of the Kirin 970. If the chip ships in the Huawei Mate 10, due to launch in October, the Mate 10's data processing capabilities should be formidable.

Like Huawei, global tech giants such as Intel, Lenovo, NVIDIA, Google, and Microsoft are all actively embracing AI, and developing AI chips has become a top priority for each of them.

Intel

In an interview with Xinzhijie Media this month, Song Jiqiang, director of Intel's China Research Institute, emphasized the importance of AI chips: technology is needed to process vast amounts of data and turn it into value for customers, and in that process the chip plays an indispensable role. By 2020, a conservative estimate puts 50 billion interconnected devices worldwide. Future data will come from all kinds of device terminals, no longer just from human actions such as making calls, playing mobile games, or sending email; autonomous vehicles, smart homes, and surveillance cameras all generate data.
Each autonomous vehicle generates over 4,000 gigabytes of data daily, a volume that cannot be fully transmitted over 5G. A significant amount of data must therefore be processed and analyzed locally, with only a selection uploaded, and that local processing will require technologies beyond today's server technology.

As a traditional chip manufacturer, Intel launched a new generation of Xeon server chips in July this year, claiming dramatic performance improvements and deep learning performance 2.2 times that of the previous generation of servers, capable of handling both training and inference. Intel has also demonstrated field-programmable gate array (FPGA) technology, which it expects to play a crucial role in the future of AI, and plans to release the Lake Crest processor, aimed at deep learning workloads.

Lenovo

He Zhiqiang, senior vice president of Lenovo Group and president of Lenovo Capital and Incubator Group, echoing Lenovo Group President Yang Yuanqing, called the AI general-purpose processor chip "the strategic commanding height of the artificial intelligence era." In the era of the intelligent internet, AI chips are the engines of artificial intelligence and will play a decisive role in its development. Just last week, Lenovo Capital and top investors such as Alibaba Ventures jointly invested in Cambricon Technologies, billed as the world's first AI chip unicorn.

NVIDIA

Over the past few years, NVIDIA has shifted its focus to AI and deep learning. In May of this year, NVIDIA released a heavyweight processor for artificial intelligence applications: the Tesla V100. The chip packs 21 billion transistors, far more than the 15-billion-transistor Pascal processor NVIDIA released last year. Despite a die roughly the size of an Apple Watch face, it features 5,120 CUDA (Compute Unified Device Architecture) cores and delivers 7.5 teraflops of double-precision floating-point performance.
NVIDIA CEO Jensen Huang said the company spent $3 billion developing the chip; the DGX-1 system built around it is priced at $149,000.

Google

Google, which has announced a strategic shift to "AI first," last year unveiled the TPU (Tensor Processing Unit), a chip tailored specifically for machine learning. Compared with CPUs and GPUs, the TPU delivers 15-30 times higher performance and 30-80 times better performance per watt. At its developer conference in May this year, Google announced a new product, the Cloud TPU, whose four processing chips together deliver 180 teraflops. Connecting 64 Cloud TPUs forms a supercomputer Google calls a Pod, with a computational power of 11.5 petaflops (1 petaflop equals 10^15 floating-point operations per second), a very important foundational tool for AI research. TPUs are already deployed across nearly all of Google's products, including Google Search and Google Assistant, and they played a pivotal role in AlphaGo's Go matches against Lee Sedol.

Microsoft

Last month, media reports indicated that Microsoft will include an independently designed AI coprocessor in the next-generation HoloLens, enabling the device to analyze what users see and hear without the delay of sending data to the cloud for processing. The chip is currently under development and will be integrated into the next-generation HoloLens Holographic Processing Unit (HPU). Microsoft says the AI coprocessor will be its first chip designed for mobile devices. In recent years, Microsoft has invested heavily in its own AI silicon: it created a motion-tracking processor for the Xbox Kinect gaming system, and to compete with Google and Amazon in cloud services it deployed custom field-programmable gate arrays (FPGAs), buying programmable chips from Altera, now an Intel subsidiary, and writing custom software for them.
Last year, Microsoft used thousands of these AI chips in a conference demonstration to translate all of English Wikipedia, about 5 million articles, into Spanish in less than a tenth of a second. Going forward, Microsoft wants customers of its cloud to run their own tasks on AI chips, such as identifying images in massive datasets or predicting consumer purchasing patterns with machine learning algorithms.
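The Cloud TPU Pod figures quoted above are easy to verify: 64 Cloud TPUs at 180 teraflops each work out to the roughly 11.5 petaflops Google cites. A quick sanity check (the variable names are illustrative, not Google's):

```python
# Reported figures: one Cloud TPU delivers 180 teraflops,
# and a Pod links 64 Cloud TPUs together.
TFLOPS_PER_CLOUD_TPU = 180
TPUS_PER_POD = 64

pod_tflops = TFLOPS_PER_CLOUD_TPU * TPUS_PER_POD  # 11,520 teraflops
# 1 petaflop = 1,000 teraflops = 10**15 floating-point operations per second
pod_pflops = pod_tflops / 1000

print(pod_pflops)  # 11.52, matching the ~11.5 petaflops cited
```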
