On the universality of the advancement, complexification, and refinement of phenomena through random changes in events and the specific selection directionality applied to those changes - Part 3

"Randomness" is the key factor for the evolution of various phenomena.
The directions of this evolution are determined by specific windows of selection applied to the random changes.

On my note, my main mission is to take English articles on the latest issues in science, economics, society, and so on, edit the English so that it is easy to read, and offer them as both an "English study tool" and a "tool for gathering the latest information." Of course, I also frequently add my own views.

This time, I would like to introduce a series of articles that seem to support a fairly universal hypothesis: that random changes in various phenomena, together with the specific selection directionality applied to those changes, determine the direction in which the phenomena evolve and, as a result, have shaped their present state.

Research team makes considerable advance in brain-inspired computing

Introduces a more efficient and sustainable hardware device for AI and ML applications

November 21, 2021
Story Source : Materials provided by University of Southern California.
Original written by Amy Blumenthal.

<Private interpretation>
In order to solve combinatorial optimization problems, randomness is introduced into the computation, and random variables are selected by dynamically tuning the randomness features.

I found it interesting that randomness matters even in the world of computers, which runs counter to my personal image of computers as machines of precision. I suspect that a wide range of problems can be solved depending on what kind of selection is applied to this randomness. To reach better solutions, the quantity, magnitude, range, spread, and targets of the randomness (perhaps not the right wording?) are important, and depending on how the selection is applied, an inappropriate solution may sometimes be reached.

When thinking about neuromorphic computing or brain-inspired computing, the article notes that randomness is important, just as in human thought. I was deeply curious about what the "selection" in human thinking looks like, that is, the process leading from randomness to the conclusion of a thought.
The history of human ideas and thought may be nothing more than the product of chance (repeated cycles of randomness → selection).


Summary :

A lab, [whose work is concentrated on neuromorphic (神経形態をもつ) computing or brain-inspired computing], has new research that introduces hardware improvements by harnessing (役立てる、生かす、用いる、利用する/hɑ́rnəs) a quality known as 'randomness' or 'stochasticity (確率論)'. Their research contradicts the perception of randomness <as a quality that will negatively impact computation results> and demonstrates the utilization of finely controlled stochastic features in semiconductor devices to improve optimization performance. This has the potential to provide a more sophisticated building block for creating computers that can tackle sophisticated optimization problems and can potentially be more efficient. What's more, they can consume less power.

Randomness :

From Wikipedia, the free encyclopedia

In common parlance (専門用語/pɑ́rləns), randomness is the apparent or actual lack of pattern or predictability in events. A random sequence [of events, symbols or steps] often has no order and does not follow an intelligible pattern or combination. Individual random events are, by definition, unpredictable, but if the probability distribution <確率分布:確率変数に対して、各々の値をとる確率を表したもの:A probability distribution is the mathematical function that gives the probabilities of occurrence of different possible outcomes for an experiment or the mathematical description of a random phenomenon in terms of its sample space and the probabilities of events [subsets of the sample space.]> is known, the frequency of different outcomes over repeated events (or "trials") is predictable. Randomness applies to concepts of chance, probability, and information entropy. The fields of mathematics, probability, and statistics use formal definitions of randomness. In statistics, a random variable 
<確率変数:ある試行によって得られるすべての結果を指す変数であり、実際に試行、観測を行うまで何の結果が得られるか分からないもの:A Random Variable is a set of possible values from a random experiment. The set of possible values is called the Sample Space.>
is an assignment of a numerical value to each possible outcome of an event space. This association facilitates the identification and the calculation of probabilities of the events. Random variables can appear in random sequences. A random process is a sequence of random variables whose outcomes do not follow a deterministic pattern, but follow an evolution described by probability distributions. These and other constructs are extremely useful in probability theory and the various applications of randomness.
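The distinction drawn above — individual outcomes are unpredictable, yet long-run frequencies follow the probability distribution — can be illustrated with a minimal Python sketch (my own illustration, not from the article; the function name is made up):

```python
import random
from collections import Counter

def roll_frequencies(n_trials, seed=0):
    """Empirical frequencies of a fair six-sided die.

    Each individual roll is unpredictable (randomness), but the
    frequency of each outcome over many repeated trials converges
    to the known probability 1/6 (the probability distribution).
    """
    rng = random.Random(seed)
    counts = Counter(rng.randint(1, 6) for _ in range(n_trials))
    return {face: counts[face] / n_trials for face in range(1, 7)}

freqs = roll_frequencies(60_000)
# Every empirical frequency ends up close to 1/6 ≈ 0.1667.
```

This is exactly the sense in which "the frequency of different outcomes over repeated events (or 'trials') is predictable" even though each event is not.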

FULL STORY :

While AI is often perceived by the public to be affiliated with software, researchers [in Han Wang's Emerging Nanoscale Materials and Device Lab at the USC Ming Hsieh Department of Electrical and Computer Engineering and the Mork Family Department of Chemical Engineering and Materials Science] focus on improving AI and machine learning performance through hardware. The lab, [whose work is concentrated on neuromorphic computing or brain-inspired computing], has new research that introduces hardware improvements by harnessing a quality known as "randomness" or "stochasticity." Their research, now published in Nature Communications, contradicts the perception of randomness as a quality that will negatively impact computation results and demonstrates the utilization of finely controlled stochastic features in semiconductor devices to improve optimization performance.

In the brain, randomness plays an important role in human thought or computation. It is born from billions of neurons that spike in response to input stimuli and generate a lot of signals that may or may not be relevant. The decision-making process is perhaps the best-studied example of how our brain makes use of randomness. It allows the brain to take a detour (迂回路、回り道/díːtuər) from past experiences and explore a new solution when making a decision, especially in a challenging and unpredictable situation.

"Neurons exhibit stochastic (確率論的な、確率変数の/stɔkǽstik) behavior, which can help certain computational functions," said USC PhD student Jiahui Ma and lead author Xiaodong Yan (both contributed equally as first authors). The team wanted to emulate (まねる、模倣する、見習う/émjəlèit) neurons as much as possible and designed a circuit to solve combinatorial optimization problems, which are among the most important tasks for computers to complete.

The thinking is that for computers to do this efficiently, they need to behave more like the human brain (on super steroids : in a bigger or more intense form than normal) in terms of how they process stimuli and information, as well as make decisions.

In much simpler terms, we need computers to converge (収束する/kənvə́ːrdʒ) on the best solution among all possibilities. Say the researchers, "The randomness introduced in the new device, as demonstrated in this work, can prevent it from getting stuck (行き詰まる、立ち往生する、動けなくなる、お手上げ状態になる) at a not-so-viable (実行可能な/váiəbl) solution, and instead continue to search until it finds a close-to-optimal result." This is particularly important for optimization problems, says corresponding author Professor Wang: "If one can dynamically tune (調整する/túːn) the randomness features, the machine for performing optimization can work more efficiently as we desire."
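The behavior the researchers describe — randomness that lets a search escape a not-so-viable solution, tuned down over time — is the classic simulated annealing idea. A minimal generic sketch in Python (my own illustration of the algorithm, not the device; all names and the toy landscape are invented):

```python
import math
import random

def accept_probability(delta, temperature):
    """Boltzmann acceptance rule: downhill moves are always taken;
    uphill (worse) moves are taken with probability exp(-delta / T)."""
    if delta <= 0:
        return 1.0
    return math.exp(-delta / temperature)

def anneal(energy, neighbors, start, t_start=5.0, t_end=0.01, steps=2000, seed=0):
    """Generic simulated annealing: the temperature (the amount of
    randomness) is dynamically tuned down, which is the 'cooling'."""
    rng = random.Random(seed)
    state, best = start, start
    for k in range(steps):
        # Exponential cooling schedule from t_start down to t_end.
        t = t_start * (t_end / t_start) ** (k / steps)
        cand = rng.choice(neighbors(state))
        if rng.random() < accept_probability(energy(cand) - energy(state), t):
            state = cand
        if energy(state) < energy(best):
            best = state
    return best

# Toy landscape: a local minimum at index 2, the global minimum (0) at index 7.
landscape = [5, 3, 1, 4, 6, 4, 2, 0, 3, 5]
nbrs = lambda i: [j for j in (i - 1, i + 1) if 0 <= j < len(landscape)]
result = anneal(lambda i: landscape[i], nbrs, start=2)
```

Started at the local minimum (index 2), a pure downhill search would be stuck there forever; the early high-temperature randomness lets the search climb over the barrier at index 4 and reach the better solution.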

<Private interpretation>
In order to solve combinatorial optimization problems, randomness is introduced into the computation, and random variables are selected by dynamically tuning the randomness features.

The researchers achieve this dynamic "tuning" by creating a specialized device, a hetero-memristor. Unlike transistors, which are logic switches inside a regular computer chip, the hetero-memristor combines memory and computation. Memristors have been developed before, normally with a two-terminal structure. The Viterbi team's innovation is in adding a third electrical terminal and modulating its voltage to activate the neuron-like device and to dynamically tune the stochastic features in its output, much like one heats up a pot of water and dynamically adjusts the temperature to control the activity of the water molecules, hence enabling the so-called simulated "cooling." This provides a level of control that earlier memristors do not have.

The researchers say, "This method emulates (まねる、模倣する/émjəlèit) the stochastic properties of neuron activity." In fact, neuron activity is perceived to be random, but may follow a certain probability pattern. The hetero-memristors [they developed] introduce such [probability-governed randomness] into a neuromorphic computing circuit by the reconfigurable (再構成可能な) tuning of the device's intrinsic stochastic property.
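A toy model of such a tunable stochastic neuron can be written in a few lines of Python (a generic sigmoid-firing sketch under my own assumptions, not the actual device physics; the paper's devices realize a Fermi-Dirac-like distribution in hardware):

```python
import math
import random

def firing_probability(x, temperature):
    """Sigmoid (Fermi-Dirac-like) firing probability for input x.

    Lower temperature sharpens the curve toward a deterministic
    threshold; higher temperature makes the neuron more random.
    """
    return 1.0 / (1.0 + math.exp(-x / temperature))

def stochastic_neuron(x, temperature, rng):
    """Fire (1) or stay silent (0) according to the tunable probability."""
    return 1 if rng.random() < firing_probability(x, temperature) else 0

rng = random.Random(42)
# The neuron's output is random, but governed by a probability pattern:
# over many trials, the firing rate approaches the programmed probability.
rate = sum(stochastic_neuron(0.0, 1.0, rng) for _ in range(10_000)) / 10_000
```

This mirrors the sentence above: neuron activity is "perceived to be random, but may follow a certain probability pattern," and the temperature parameter is what the third terminal makes reconfigurable.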

This is thus a more sophisticated building block for creating computers that can tackle sophisticated optimization problems, which can potentially be more efficient. What's more, they can consume less power.

The full research team includes Xiaodong Yan, Jiahui Ma, Tong Wu, Aoyang Zhang, Jiangbin Wu, Matthew Chin, Zhihan Zhang, Madan Dubey, Wei Wu, Mike Shuo-Wei Chen, Jing Guo, and Han Wang.

Research was done in collaboration with the Army Research Laboratory, the University of Florida and Georgia Tech.

Journal Reference

  1. Xiaodong Yan, Jiahui Ma, Tong Wu, Aoyang Zhang, Jiangbin Wu, Matthew Chin, Zhihan Zhang, Madan Dubey, Wei Wu, Mike Shuo-Wei Chen, Jing Guo, Han Wang. Reconfigurable Stochastic neurons based on tin oxide/MoS2 hetero-memristors for simulated annealing and the Boltzmann machine. Nature Communications, 2021; 12 (1) DOI: 10.1038/s41467-021-26012-5

Reconfigurable Stochastic neurons based on tin oxide/MoS2 hetero-memristors for simulated annealing and the Boltzmann machine

Nature Communications / volume 12 / Article number: 5710 (2021) / Published: 29 September 2021

Abstract

Neuromorphic hardware implementation of Boltzmann Machine
<A Boltzmann machine is a type of stochastic recurrent neural network. It was translated from statistical physics for use in cognitive science.>
using a network of stochastic
確率論的な、確率変数の/stɔkǽstik:"stochastic" refers to the property of being well described by a random probability distribution <確率分布:確率変数に対して、各々の値をとる確率を表したもの:A probability distribution is the mathematical function that gives the probabilities of occurrence of different possible outcomes for an experiment / or the mathematical description of a random phenomenon in terms of its "sample space" and the probabilities of events [subsets of the "sample space"=the set of possible values is called the "sample space"]>. Although "stochasticity" and "randomness" are distinct in that the former refers to a modeling approach and the latter refers to phenomena themselves, these two terms are often used synonymously. Furthermore, in probability theory, the formal concept of a stochastic process is also referred to as a random process.
neurons can allow non-deterministic polynomial-time (NP) 
「NP : In computational complexity theory, NP / non-deterministic polynomial-time is a complexity class used to classify decision problems. NP is the set of decision problems for which the problem instances, where the answer is "yes", have proofs verifiable in polynomial (多項式の/pɑ̀linóumiəl) time by a deterministic Turing machine <A Turing machine is a theoretical device that manipulates symbols on a strip of tape according to a table of rules. Despite its simplicity, a Turing machine can be adapted to simulate the logic of any computer algorithm, and is particularly useful in explaining the functions of a CPU inside a computer>」
hard combinatorial optimization problems to be efficiently solved.
Efficient implementation of such Boltzmann Machine
<A Boltzmann machine is a type of stochastic recurrent neural network. It was translated from statistical physics for use in cognitive science.>
with [simulated annealing (強固にする、堅固にする/əníːl)] requires [the statistical parameters of the stochastic neurons to be dynamically tunable]; however, there has been limited research on stochastic semiconductor devices with controllable statistical distributions. Here, we demonstrate a reconfigurable tin oxide (SnOx)/molybdenum disulfide (MoS2) heterogeneous memristive device that can realize tunable stochastic dynamics in its output sampling characteristics. The device can sample exponential-class sigmoidal distributions analogous to the Fermi-Dirac distribution of physical systems with a quantitatively defined tunable "temperature" effect. A BM composed of these tunable stochastic neuron devices, [which can enable simulated annealing with designed "cooling" strategies], is demonstrated to solve the MAX-SAT
<In computational complexity theory, the maximum satisfiability problem (MAX-SAT) is the problem of determining the maximum number of clauses, of a given Boolean formula in conjunctive normal form, that can be made true by an assignment of truth values to the variables of the formula. It is a generalization of the Boolean satisfiability problem [In logic and computer science, the Boolean satisfiability problem is the problem of determining if there exists an interpretation that satisfies a given Boolean formula. In other words, it asks whether the variables of a given Boolean formula can be consistently replaced by the values TRUE or FALSE in such a way that the formula evaluates to TRUE. If this is the case, the formula is called satisfiable. On the other hand, if no such assignment exists, the function expressed by the formula is FALSE for all possible variable assignments and the formula is unsatisfiable. For example, the formula "a AND NOT b" is satisfiable because one can find the values a = TRUE and b = FALSE, which make (a AND NOT b) = TRUE. In contrast, "a AND NOT a" is unsatisfiable.], which asks whether there exists a truth assignment that makes all clauses true.>,
a representative NP-hard combinatorial optimization problem. Quantitative insights into the effect of different "cooling" strategies on improving the efficiency of the BM optimization process are also provided.
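To make the MAX-SAT definition above concrete: a brute-force solver in Python (my own illustration; this exhaustive method is exponential in the number of variables, which is exactly why NP-hard instances call for heuristics like the paper's annealed Boltzmann machine):

```python
from itertools import product

def max_sat(clauses, n_vars):
    """Brute-force MAX-SAT: the largest number of clauses any truth
    assignment can satisfy.

    A clause is a tuple of literals; literal +i means variable i is
    TRUE, -i means variable i is FALSE. A clause is satisfied if at
    least one of its literals holds.
    """
    best = 0
    for bits in product([False, True], repeat=n_vars):
        satisfied = sum(
            any((lit > 0) == bits[abs(lit) - 1] for lit in clause)
            for clause in clauses
        )
        best = max(best, satisfied)
    return best

# (x1 OR x2) AND (NOT x1) AND (NOT x2): no assignment satisfies all
# three clauses, but several satisfy two, so the MAX-SAT value is 2.
example = [(1, 2), (-1,), (-2,)]
```

This also shows the generalization mentioned in the gloss: SAT asks whether *all* clauses can be made true, while MAX-SAT asks for the *maximum* number.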

Introduction

Stochastic neuron devices are essential for the neural network implementation of key emerging non-von-Neumann computing concepts <Today, we are entering the era of cognitive computing, which holds great promise in deriving intelligence and knowledge from huge volumes of data. In today’s computers based on von Neumann architecture, huge amounts of data need to be shuttled back and forth at high speeds, a task at which this architecture is inefficient. It is becoming increasingly clear that to build efficient cognitive computers, we need to transition to non-von Neumann architectures in which memory and processing coexist in some form.> such as the Boltzmann machines
<A Boltzmann machine is a type of stochastic recurrent neural network. It was translated from statistical physics for use in cognitive science.>
which are recurrent artificial neural networks with stochastic features analogous to the thermodynamics of real-world physical systems. BM can be used to solve a broad range of combinatorial optimization problems with applications in classification, pattern recognition, feature learning, and other emerging computing systems. Deriving its name from the Boltzmann distribution of statistical mechanics, BM possesses an artificial notion of “temperature”, and the controlled evolution of this “temperature” parameter during the optimization process i.e., [the “cooling” strategy], can impact the convergence efficiency of the BM and its chance of reaching a better cost-energy minimization (or maximization depending on problem definition). To realize the hardware implementation of the BM that can also allow the “temperature” control and hence the precise execution of desired “cooling” strategy, it is essential to have electronic devices that can generate exponential-class stochastic sampling with dynamically tunable distribution parameters.
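The "cooling" strategy mentioned above — the controlled evolution of the artificial "temperature" — can be sketched in Python (a generic illustration under my own assumptions, not the paper's hardware; schedule names are invented):

```python
import math

def exponential_schedule(t_start, t_end, steps):
    """Geometric cooling: T_k = T_start * r**k, cooling fast early on."""
    r = (t_end / t_start) ** (1 / (steps - 1))
    return [t_start * r**k for k in range(steps)]

def linear_schedule(t_start, t_end, steps):
    """Linear cooling: temperature drops by a constant amount per step."""
    d = (t_start - t_end) / (steps - 1)
    return [t_start - d * k for k in range(steps)]

def uphill_acceptance(delta, schedule):
    """Boltzmann probability exp(-delta/T) of accepting a cost increase
    `delta` at each temperature of a cooling schedule: high T means
    more exploration, low T means the search settles into a minimum."""
    return [math.exp(-delta / t) for t in schedule]

exp_s = exponential_schedule(10.0, 0.1, 5)
lin_s = linear_schedule(10.0, 0.1, 5)
```

Both schedules start and end at the same temperatures, but the geometric one spends less time hot; as the text notes, which strategy converges more efficiently on a better cost-energy minimum is exactly what the paper studies quantitatively.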

The memristor in its deterministic form has been commonly used in applications such as multiply-and-accumulate matrix calculation and resistor-logic demultiplexers. Its stochastic property is often intentionally suppressed in such applications for the purpose of achieving accurate and reproducible computational results. On the other hand, the rich stochastic properties of memristors, which rely on ensembles of random movements of atoms and ions, offer opportunities in energy-efficient computing applications. With the stochastic property, one can generate random numbers to encrypt information, implement physical unclonable functions, and realize artificial neurons with integrate-and-fire activations. Furthermore, emerging computing schemes can use stochastic memristive devices as building blocks to emulate biological neural networks whose functions—[such as decision-making]—can leverage the stochastic dynamics of neurons and synapses. However, a common challenge with previous stochastic memristors is the lack of means to precisely control and modulate the probability distribution associated with their randomness. Realizing such devices has been difficult because many device-generated random features in stochastic memristors or oscillators lack a stable probability distribution, which limits the chance of controlling it experimentally. Additionally, with only two terminals in a common memristor, where the probability distribution can only be influenced through the two-terminal bias, the probability distribution of the device output cannot be tuned flexibly and precisely.

In this work, we overcome this challenge with a three-terminal stochastic hetero-memristor based on a tin oxide/MoS2 heterostructure, which demonstrates tunable statistical distributions enabled by gate modulation. The inherent exponential-class stochastic characteristics of the device, arising from the intrinsic randomness and energy distribution in its ionic motions, are explored to realize sampling of exponential-class sigmoidal distributions that resemble the Fermi–Dirac distribution in physical systems. The device incorporates gate modulation that allows the efficient control of the stochastic features in the device output characteristics. The device enables the realization of a reconfigurable stochastic neuron and the implementation of a Boltzmann machine in which the reconfigurable statistics of the device allow different "cooling" strategies to be implemented during the optimization process. The effect of different "cooling" strategies on improving the optimization process efficiency of the BM is demonstrated experimentally.
