
BrainChip Holdings (ASX: BRN) – Taking Deep Learning to Higher Levels

Article written by Pitt Street Research. For the full report click here

BrainChip Holdings Limited (ASX:BRN) is an ASX-listed semiconductor company that is currently in the early stages of commercializing Akida, its Neuromorphic System-on-Chip (NSoC). By integrating the Akida technology into their products, BRN’s current and prospective customers can bring the benefits of one of today’s most advanced technologies in Artificial Intelligence (AI) to their end-customers.

Addressing a market expected to grow to US$ 66BN by 2025

The market for Deep Learning chipsets is expected to grow from around US$ 4BN in 2018 to more than US$ 66BN by 2025, according to Tractica, implying a CAGR of nearly 49%. Within this total market, BRN aims to sell dedicated Neuromorphic System-on-Chip (NSoC) devices and, more selectively, Intellectual Property (IP) blocks. Most of the chips used in AI applications being sold today are general-purpose chips, such as Graphics Processing Units (GPUs). Key target markets for BRN’s NSoCs are primarily vision systems such as surveillance cameras, Advanced Driver Assistance Systems (ADAS) and Autonomous Vehicles (AV), vision guided robotics, drones, and Industrial Internet of Things (IIoT). More specialized edge applications such as cell phones are addressed with IP blocks.

Akida enables hardware-based Neuromorphic Computing

BRN has developed a hardware version of the biological neuron, like the ones found in the human brain, and has packed 1.2M of these artificial neurons and the accompanying 10BN artificial synapses onto a neuromorphic computer chip called the Akida NSoC. The architecture and behavior of this chip are similar to those of biological neurons, but implemented in a mainstream digital logic process.

The Akida technology is what is known as a Spiking Neural Network (SNN), i.e. similar to the human brain, Akida processes spikes, or events, rather than raw data. The advantage of processing spikes is that it can be done in an event-driven manner, in contrast to how “traditional” software-based neural networks, such as Convolutional Neural Networks (CNNs), process data. The result is that the event-driven network only processes data, and only consumes power, when events are present, whereas CNNs consume power by continuously processing all of the input data.
This means SNNs are much faster and require only a fraction of the power consumed by CNNs. Additionally, BRN expects to commercialize the Akida NSoC at a substantially lower price point.
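To illustrate the general principle of event-driven processing versus continuous, frame-based processing, the sketch below (in Python, with hypothetical data and a hypothetical threshold; it is not BrainChip’s actual implementation) counts how many operations each approach performs on a mostly static video stream:

    # Illustrative sketch only -- not the Akida implementation.
    # Frame-based processing touches every pixel of every frame; event-driven
    # processing only does work where a pixel changed enough to produce a spike.
    import numpy as np

    def frame_based_ops(frames):
        """Count operations for dense processing: one operation per pixel per frame."""
        return sum(frame.size for frame in frames)

    def event_based_ops(frames, threshold=0.1):
        """Count operations for event-driven processing: work scales with activity."""
        ops, prev = 0, np.zeros_like(frames[0])
        for frame in frames:
            events = np.abs(frame - prev) > threshold  # spikes = significant changes
            ops += int(events.sum())
            prev = frame
        return ops

    # A mostly static 64x64 scene in which only ~1% of pixels change per frame.
    rng = np.random.default_rng(0)
    frames = [np.zeros((64, 64))]
    for _ in range(9):
        nxt = frames[-1].copy()
        rows, cols = rng.integers(0, 64, 40), rng.integers(0, 64, 40)
        nxt[rows, cols] += 1.0
        frames.append(nxt)

    print("frame-based operations:", frame_based_ops(frames))   # 40,960
    print("event-driven operations:", event_based_ops(frames))  # a few hundred

In the event-driven case, the work done, and hence the power consumed, scales with the amount of activity in the scene rather than with the raw data rate.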


Commercialization of Akida has already started

BRN has been working on the development of the Akida technology for ten years and anticipates the technology will be available in test chips for prospective customers in the second half of 2019. However, commercialization of Akida has already started: BRN recently released the Akida Development Environment (ADE), a software version of Akida that customers and prospects can use to create, train and test neural networks destined for the Akida NSoC, as well as to run inference (outcome-based processing) to determine the performance and accuracy of the neural network.

Substantial pipeline of prospects

As of November 2018, BRN was involved in 21 active or committed pilot projects for Akida and its revenue-generating software-based SNN, BrainChip Studio. In addition, BRN had scored a large number of design wins and had more than 500 leads in its commercial pipeline. Upon successful completion of transferring the Akida IP from lab to fab, expected in the second half of 2019, we believe BRN should be in a position to secure a variety of different customer types for Akida, including cell phone manufacturers, semiconductor foundries, automotive OEMs for Advanced Driver Assistance Systems (ADAS) and Autonomous Vehicles, third-party semiconductor IP providers, IDMs (Integrated Device Manufacturers) and companies in the imaging space. Given that BRN will be commercializing the Akida NSoC through both device sales and an asset-light IP licensing model with recurring royalty revenues, we anticipate high gross margins.

Valuation range of A$ 0.40-0.45 if Akida NSoC shows commercial viability

Looking at where Akida currently is in its development/commercialization process, we are of the opinion that BRN’s share price does not accurately reflect the company’s commercial
potential described above. In our view, the company’s valuation could move towards levels seen for comparable companies, such as Nervana and Movidius, which were acquired by Intel Corp (NASDAQ:INTC), i.e. equivalent to A$ 0.43 per fully diluted BRN share. Similarly, we believe AudioPixels’ (ASX:AKP) A$ 560M valuation, equivalent to A$ 0.44 per fully diluted BRN share, also provides a gauge of BRN’s potential valuation, given where AKP is positioned in its development process. In the near to medium term, though, we believe BRN will first need to demonstrate commercial viability of the Akida NSoC by signing one or more commercial agreements that involve integration of Akida and/or components of the Akida IP stack into customers’ designs.

Given the strong upside we can potentially see for BRN’s share price if Akida is commercially validated, we start our coverage with a Speculative Buy recommendation.

Near-term share price catalysts

• Conversion of current prospects and discussion partners into paying customers and development partners, e.g. to develop and deploy Akida for specific applications such as cell phones and ADAS.

• Updates on the Akida NSoC commercialization roadmap, specifically around first silicon test results.

• Additional customers for BrainChip Studio, especially in the Casino and Law Enforcement sectors.

• One of the Top-20 shareholders, Metals X Limited, seems to have largely sold its position on market recently. With selling pressure from Metals X now gone, we may see a bounce in BRN in the short term.

Artificial Intelligence comes in different shapes and forms

Artificial Intelligence (AI) is the science of training systems to perform human tasks through learning and automation. AI makes it possible for machines to learn to apply logic, adjust to new inputs and reason to gain an understanding from complex data. In simple words, AI provides machines with the ability to learn from the data they receive by processing and recognizing patterns in that data.

AI is an overarching term and essentially consists of foundational building blocks and key elements, namely machine learning and deep learning, computer vision, natural language processing (NLP), forecasting and optimization, and machine reasoning. These building blocks or elements can be used independently or combined to build AI capability. Several AI capabilities and their use cases in a business context are illustrated in Figure 1.


These AI capabilities can be used either independently or combined with each other, depending on users’ objectives and underlying data. For instance, in the banking industry, combinations of these capabilities are used for credit and risk analysis and to provide market recommendations by creating automated financial advisors. In the healthcare industry, such combinations are used for processing data from past case notes, biomedical imaging, health monitoring, etc. Other industries such as manufacturing and retail are also utilizing AI capabilities to optimize supply chains or to offer personalized shopping experiences and customized recommendations. In addition, governments across the globe are focusing on building smart cities and utilizing capabilities such as facial recognition for use in law enforcement.

Machine Learning and Deep Learning

While AI comprises all techniques that make machines perform tasks that require intelligence, Machine Learning specifically imitates how humans learn. Basically, Machine Learning is a subset of AI (Figure 2) and consists of the techniques that enable machines to learn from data without being explicitly programmed to do so. Conversely, other AI techniques could be classified as rules-based or expert systems, which work on a pre-defined algorithm or logic, e.g. for accountancy tasks, in which the system runs the information through a set of static rules.

Though Machine Learning has evolved a lot over the years and is used to tackle many problems, for a long time it was still difficult for machines to perform many tasks such as speech, handwriting and image recognition, and more mundane tasks such as counting the number of items in a picture. The concept of Artificial Neural Networks (ANN) kickstarted the development of Deep Learning, which provides machines the capability to perform tasks such as image recognition, sound recognition and recommender systems with much greater accuracy and speed.

Deep Learning itself is essentially a subset of Machine Learning and is all about using neural networks comprising artificial neurons, neuron layers and interconnectivity. Instead of organizing data to run through predefined equations, Deep Learning sets up basic parameters around the data and trains the computer to learn on its own by recognizing patterns using many layers of computer processing.

Artificial Neural Networks learn like the human brain does

Artificial Neural Networks (ANNs) are computing systems with a large number of interconnected nodes that work almost like neurons in the human brain. They use algorithms to recognize hidden patterns and correlations in raw data and then cluster and classify that data to solve specific problems. Over time, neural networks continuously learn from new data and apply those learnings to make future decisions.

A simple neural network includes an input layer, an output (or target) layer, and a hidden layer in between. The artificial neurons (or nodes) in these layers are interconnected, forming the network (Figure 3). As the number of hidden layers increases, the network becomes a deep neural network. A simple ANN might contain two or three hidden layers, while deep neural networks can contain as many as 100.

In a typical neural network, a node is patterned after a neuron in the human brain. These nodes get activated when there are sufficient stimuli or inputs (just like neurons in the human brain). This activation spreads throughout the network, creating a response to the stimuli (output). The connections between these artificial neurons act as simple synapses, enabling signals to be transmitted from one neuron to another. Signals travel from the first (input) layer to the last (output) layer and get processed along the way.

While solving a problem or addressing a request, data such as text, images, audio and video is fed into the network via the input layer, which communicates with one or more hidden layers. Each neuron receives inputs from the neurons to its left, and the inputs get multiplied by the weights of the connections they travel along. These input-weight products are then summed up. If the sum is higher than a certain threshold value, the neuron fires and triggers the neurons it is connected to on the right. In this way, the sum of the input-weight products determines the extent to which a signal must progress further through the network to affect the final output. In the next chapter we will discuss this process in more detail.
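As a concrete illustration of the weighted-sum-and-threshold behavior described above, here is a minimal Python sketch of a single artificial neuron (the input values, weights and threshold are made up for the example):

    import numpy as np

    def neuron_fires(inputs, weights, threshold):
        """Multiply each input by the weight of its connection, sum the products,
        and fire (output 1) only if the sum exceeds the threshold."""
        weighted_sum = float(np.dot(inputs, weights))
        return 1 if weighted_sum > threshold else 0

    # Three inputs from the previous layer and the weights of their connections.
    inputs = np.array([0.9, 0.2, 0.7])
    weights = np.array([0.5, -0.3, 0.8])
    # weighted sum = 0.9*0.5 + 0.2*(-0.3) + 0.7*0.8 = 0.95 > 0.6, so the neuron fires.
    print(neuron_fires(inputs, weights, threshold=0.6))  # prints 1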

Many types of neural networks

Over the past several years, many neural networks with different architectures and specifications have emerged. Feedforward Neural Networks (FNNs) are the simplest form of ANNs. For specific tasks, more complex ANNs have been invented, including Convolutional Neural Networks (CNNs), which aim to mimic the human visual system, as well as Recurrent Neural Networks (RNNs), which are used to interpret sequential data such as text and video.

These major types of ANNs are described in Figure 4.

Supervised Learning versus Unsupervised Learning

Since the advent of Machine Learning, different algorithms or methods have been developed to process both structured and unstructured data. However, all Machine Learning methods can be broadly classified into either supervised learning or unsupervised learning (Figure 5), though supervised learning is the most commonly used form of Machine Learning.

With supervised learning, each input fed to the system is labeled with a desired output value. A supervised learning algorithm analyzes the data and compares its actual output with the desired output to find errors and modify the model accordingly. Supervised learning is commonly used in applications where future events are predicted based on historical data, e.g. identifying fraudulent credit card transactions and predicting which insurance customers are likely to file claims.
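A minimal supervised-learning sketch, assuming scikit-learn is available and using a small, made-up set of labeled transactions, looks as follows; every training example carries the desired output (fraud or not), which is what the model learns to reproduce:

    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import train_test_split

    # Hypothetical features per transaction: [amount, hour_of_day, is_foreign]
    X = [[1200, 3, 1], [15, 14, 0], [980, 2, 1], [40, 10, 0],
         [25, 12, 0], [1500, 4, 1], [60, 18, 0], [2000, 1, 1]]
    y = [1, 0, 1, 0, 0, 1, 0, 1]  # labels: 1 = fraudulent, 0 = legitimate

    # Hold out some labeled data to measure how well the learned model generalizes.
    X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=0)
    model = LogisticRegression().fit(X_train, y_train)
    print("held-out accuracy:", model.score(X_test, y_test))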

In unsupervised learning, the training set submitted as input to the system is not labeled with a desired outcome. In simple words, unsupervised learning is used on data that has no historical labels. The system itself therefore has to explore and structure the data, identify common characteristics, and adapt based on knowledge gained during the process.

This form of Machine Learning is commonly used to segment customers with similar attributes, who can then be treated similarly in marketing campaigns. It can also identify the main attributes that separate customer segments from each other. Other applications include segmentation of text topics, image recognition, pattern recognition in financial markets data, identification of data outliers, and sound analysis, e.g. to detect anomalies and potential problems in jet engines.
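By contrast, a minimal unsupervised-learning sketch (again assuming scikit-learn, with made-up customer attributes) provides no labels at all; k-means groups the customers purely by the similarity of their attributes:

    from sklearn.cluster import KMeans

    # Hypothetical attributes per customer: [annual_spend, store_visits_per_month]
    customers = [[200, 1], [250, 2], [220, 1],        # low-spend, infrequent visitors
                 [5000, 12], [5400, 10], [4800, 11]]  # high-spend, frequent visitors

    # No labels are supplied; the algorithm discovers the two segments itself.
    segments = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(customers)
    print("segment per customer:", segments)  # e.g. [0 0 0 1 1 1] or [1 1 1 0 0 0]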

Convolutional Neural Networks are widely used today

CNNs are among the most widely used ANNs today given that they can learn unsupervised and require relatively little pre-processing. CNNs are used in a range of areas, including statistics, natural language processing as well as in signal and image processing, e.g. for medical image analysis.
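To show what such a network looks like in practice, below is a minimal CNN sketch in PyTorch (an illustrative toy classifier, unrelated to BrainChip’s technology): convolutional layers learn local visual features, pooling layers downsample, and a final fully connected layer maps the features to class scores.

    import torch
    import torch.nn as nn

    class TinyCNN(nn.Module):
        def __init__(self, num_classes=10):
            super().__init__()
            self.features = nn.Sequential(
                nn.Conv2d(3, 16, kernel_size=3, padding=1),  # 16 learned 3x3 filters
                nn.ReLU(),
                nn.MaxPool2d(2),                             # 32x32 -> 16x16
                nn.Conv2d(16, 32, kernel_size=3, padding=1),
                nn.ReLU(),
                nn.MaxPool2d(2),                             # 16x16 -> 8x8
            )
            self.classifier = nn.Linear(32 * 8 * 8, num_classes)

        def forward(self, x):
            x = self.features(x)                  # (N, 32, 8, 8) for 32x32 RGB inputs
            return self.classifier(x.flatten(1))  # class scores

    scores = TinyCNN()(torch.randn(1, 3, 32, 32))  # one random 32x32 RGB image
    print(scores.shape)  # torch.Size([1, 10])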

However, CNNs are rather impractical for many visual imagery classification tasks given the large data sets that need to be processed, which consumes enormous amounts of energy, while CNNs are also relatively slow. With the advent of autonomous vehicles and the stringent image recognition requirements of Advanced Driver Assistance Systems (ADAS) in cars, today’s CNNs may not be the best solution.

Pros and Cons of Machine Learning and Deep Learning

In summary, Machine Learning and Deep Learning have many applications, and organizations use these applications to drive automation for specific tasks and processes, e.g. to save cost, bring products to market faster, improve operational efficiencies, prevent fraud, gain new insights into data and enable new technologies to be deployed faster. Homeland Security (HLS) and law enforcement are other application areas for AI.

While Machine Learning supplements data mining, assists decision making and enables the development of autonomous computers and software programs, Deep Learning, on the other hand, performs complex computations and is widely used for difficult problems that require real-time analysis, such as speech and object recognition, language translation and fraud detection.

However, these AI technologies do have their own limitations. Both Machine Learning and Deep Learning are susceptible to errors, and whenever they make errors, diagnosing and correcting them can be difficult. In addition, these technologies cannot always deliver immediate, accurate predictions, as they require substantial computational power and can be difficult to deploy, especially in real time.

Furthermore, the outcomes generated by these technologies are prone to hidden and unintentional biases, including racial biases, depending on the data used to train them. Also, these technologies cannot always provide rational reasons for a prediction or decision. Nevertheless, the utilization of Machine Learning and Deep Learning is anticipated to rise substantially, as the potential of neural networks to solve problems, make predictions and improve decision-making is unparalleled.

Conclusion

In November 2018, BRN indicated that it was involved in 21 active or committed pilot projects for Akida and BrainChip Studio (BCS) combined, in addition to 17 design wins, 55 qualified sales opportunities and more than 500 leads.

Upon successful completion of transitioning the Akida IP from lab to fab, expected in the second half of 2019, we believe BRN should be in a position to secure a variety of different customer types for Akida, including cell phone manufacturers, semiconductor foundries, Automotive OEMs for ADAS and Autonomous Vehicles, third-party semiconductor IP providers, IDMs (Integrated Device Manufacturers) and companies in the Imaging space.

Given that BRN will be commercializing Akida through a manufacturing partnership with Socionext and an asset-light licensing model with recurring royalty revenues, we anticipate high gross margins once Akida sales ramp up. Additionally, in the near to medium term we expect BrainChip Studio to increase its traction through direct sales and through BRN’s channel
partnerships, such as with GPI.

Valuation range of A$ 0.40-0.45 if Akida NSoC shows commercial viability

Looking at where Akida currently is in its development/commercialization process, we are of the opinion that BRN’s share price does not accurately reflect the company’s commercial potential described above. In our view, the company’s valuation could move towards levels seen for comparable companies, such as Nervana and Movidius, which were acquired by Intel Corp (NASDAQ:INTC), i.e. equivalent to A$ 0.43 per fully diluted BRN share. Similarly, we believe AudioPixels’ (ASX:AKP) A$ 560M valuation, equivalent to A$ 0.44 per fully diluted BRN share, also provides a gauge of BRN’s potential valuation, given where AKP is positioned in its development process.

In the near to medium term, though, we believe BRN will first need to demonstrate commercial viability of the Akida NSoC by signing one or more commercial agreements that involve integration of Akida and/or components of the Akida IP stack into customers’ designs.

Given the strong upside we can potentially see for BRN’s share price if Akida is commercially validated, we start our coverage with a Speculative Buy recommendation.

Near-term share price catalysts

• Conversion of current discussion partners into paying customers and development partners, e.g. to develop and deploy Akida for specific applications such as for cell phones and ADAS.
• Updates on the Akida commercialization roadmap, specifically around first silicon test results.
• Additional customers for BrainChip Studio, especially in the Gaming and Law Enforcement sectors.
• One of the Top-20 shareholders, Metals X Limited, seems to have largely sold its position on market recently. The selling pressure from the sale of these 11.9M shares may have exacerbated the downward pressure on BRN’s shares. With selling pressure from Metals X now gone, we may see a bounce in BRN in the short term.

SWOT Analysis

Strengths

• The Akida IP is unique in that it combines fast, low-power Neuromorphic Processing with a targeted market price point that is expected to be well below that of similar technologies.
• The asset-light IP licensing model should make for very high gross margins once sales of Akida IP ramp up.
• BRN is already working with a large number of prospective Akida customers on the testing and integration of Akida NSoCs.

Weaknesses

• The Akida IP has yet to be transferred into an actual chip. This transfer process may encounter setbacks, which would push out the timeframe for delivery of first sample chips, currently planned for the second half of 2019.

• BRN will be competing with larger industry players that may be able to more easily fund IP development and run pilot projects with customers, which may inhibit BRN’s Akida commercialization process.

• Given the average quarterly cash burn of US$ 1.7M in FY18, with US$ 10M in cash on the balance sheet per the end of the September quarter, BRN may need to raise additional capital to fund the company until it reaches cash flow break-even, diluting current shareholders.

• Capital restrictions may limit BRN in developing IP, in addition to what the company is already working on, potentially inhibiting future growth from new products.

Opportunities

• The market for Deep Learning chipsets is potentially very large with Tractica estimating a market size of US$ 66BN by 2025, compared to US$ 5BN today. This implies a CAGR of nearly 45%.

• Additionally, most of the Deep Learning chips sold today are not dedicated Neuromorphic chips like Akida but general-purpose chips, such as GPUs, tailored to perform certain tasks such as image recognition. Hence, we see a very large displacement potential for Akida in the Deep Learning chipset market.

Threats

• Global tensions around ownership and theft of semiconductor IP between the United States and China may result in Western semiconductor companies, including BRN, being restricted in the IP they are allowed to sell to Chinese companies. This would potentially restrict BRN’s growth in one of the world’s largest markets for semiconductors.

• In generating revenues from its Neuromorphic System-on-Chip, BRN will be competing with some very large players, such as Intel, which has acquired a number of companies similar to BRN in the last few years.

Please note, the usual disclaimers apply – click here

Pitt Street Research work is commissioned by the listed companies it covers, and Pitt Street Research has received or will receive payment for the preparation of such work. Please refer to the bottom of the research notes as published on Pitt Street Research’s website for risks related to the companies being covered, as well as our General Advice Warning, disclaimer and full disclosures. Also, please be aware that the investment opinion in this report is current as at the date of publication, but that the circumstances of the company may change over time, which may in turn affect our investment opinion.


