► Computer Science 101 - My Way
A brief tour of stuff from computer science related to the brain, mind and cognition.
Foreword
I want to point out that imagination must be used alongside brain scanning to make leaps in understanding how the brain and mind process patterns in nature. As can be inferred from what we have already covered, there is these days an intimate connection between people and technology when it comes to understanding the brain and mind. What follows are brief introductions to Pattern Matching, Pattern Recognition, Statistical Inference, Computer Vision, Speech Recognition (Hearing) and Hierarchical Temporal Memory. We will then mash these introductions together into an insightful theory of how they relate to the brain's ability to endow the mind with the capacity to recognize patterns by virtue of analogy and vicinity.
Pattern Matching
In computer science, pattern matching is the act of checking a given sequence of tokens for the presence of the constituents of some pattern. In contrast to pattern recognition, the match usually has to be exact. The patterns generally take the form of either sequences or tree structures. Uses of pattern matching include outputting the locations (if any) of a pattern within a token sequence, outputting some component of the matched pattern, and substituting the matching pattern with some other token sequence (i.e., search and replace).
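To make the idea concrete, here is a minimal sketch of exact pattern matching over a token sequence — finding the locations of a pattern and doing a search-and-replace. The token data and function names are my own illustration, not from any particular system.

```python
def find_pattern(tokens, pattern):
    """Return the start indices where `pattern` occurs exactly in `tokens`."""
    n, m = len(tokens), len(pattern)
    return [i for i in range(n - m + 1) if tokens[i:i + m] == pattern]

def replace_pattern(tokens, pattern, replacement):
    """Substitute every exact occurrence of `pattern` with `replacement`."""
    out, i, m = [], 0, len(pattern)
    while i <= len(tokens) - m:
        if tokens[i:i + m] == pattern:
            out.extend(replacement)
            i += m
        else:
            out.append(tokens[i])
            i += 1
    out.extend(tokens[i:])  # keep any trailing tokens shorter than the pattern
    return out

tokens = ["the", "cat", "sat", "on", "the", "mat"]
print(find_pattern(tokens, ["the"]))            # [0, 4]
print(replace_pattern(tokens, ["the"], ["a"]))  # ['a', 'cat', 'sat', 'on', 'a', 'mat']
```

Notice that the match is all-or-nothing: a single differing token means no match, which is exactly what separates matching from recognition.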
Pattern Recognition
Pattern recognition is a branch of machine learning that focuses on the recognition of patterns and regularities in data, although it is in some cases considered to be nearly synonymous with machine learning. Pattern recognition systems are in many cases trained from labeled “training” data (supervised learning), but when no labeled data are available other algorithms can be used to discover previously unknown patterns (unsupervised learning).
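As a toy example of the supervised case, here is a 1-nearest-neighbour classifier: it is "trained" simply by storing labelled points, then labels a new point by the closest stored example. The 2-D points and labels are made up for illustration.

```python
def nearest_neighbour(training, query):
    """Return the label of the training point closest to `query` (2-D points)."""
    def dist2(a, b):
        return (a[0] - b[0]) ** 2 + (a[1] - b[1]) ** 2
    point, label = min(training, key=lambda item: dist2(item[0], query))
    return label

# Labelled "training" data: two clusters, A near the origin and B near (5, 5).
training = [((0.0, 0.0), "A"), ((0.1, 0.2), "A"),
            ((5.0, 5.0), "B"), ((4.8, 5.1), "B")]
print(nearest_neighbour(training, (0.3, 0.1)))  # A
print(nearest_neighbour(training, (4.9, 4.9)))  # B
```

Unlike the exact matching above, this tolerates imperfect input: a query never seen before still gets a sensible label.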
Statistical Inference
Statistical inference is the process of deducing properties of an underlying probability distribution by analysis of data. Inferential statistical analysis infers properties about a population: this includes testing hypotheses and deriving estimates. The population is assumed to be larger than the observed data set; in other words, the observed data is assumed to be sampled from a larger population.
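A small sketch of this idea: estimate the mean of a larger population from a modest observed sample, with a rough 95% interval using the normal approximation. The "population" here is simulated purely for illustration.

```python
import math
import random
import statistics

random.seed(0)  # fixed seed so the illustration is repeatable
population_mean, population_sd = 100.0, 15.0

# The observed data: a sample of 50, assumed drawn from the larger population.
sample = [random.gauss(population_mean, population_sd) for _ in range(50)]

estimate = statistics.mean(sample)
stderr = statistics.stdev(sample) / math.sqrt(len(sample))
low, high = estimate - 1.96 * stderr, estimate + 1.96 * stderr
print(f"estimated mean: {estimate:.1f}, 95% interval: ({low:.1f}, {high:.1f})")
```

The point is that we never see the whole population, yet the sample lets us hedge a claim about it — which is inference, not mere description.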
Vision
Computer vision is an interdisciplinary field that deals with how computers can be made to gain high-level understanding from digital images or videos. From the perspective of engineering, it seeks to automate tasks that the human visual system can do. Computer vision tasks include methods for acquiring, processing, analyzing and understanding digital images, and extracting high-dimensional data from the real world in order to produce numerical or symbolic information, e.g., in the form of decisions. Understanding in this context means the transformation of visual images (the input of the retina) into descriptions of the world that can interface with other thought processes and elicit appropriate action. This image understanding can be seen as the disentangling of symbolic information from image data using models constructed with the aid of geometry, physics, statistics, and learning theory.
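At the lowest level, "understanding" an image starts with primitives such as edges. Here is a minimal sketch: a tiny grayscale "image" as a grid of numbers, and an edge detector that thresholds horizontal intensity jumps. The image and threshold are invented for illustration.

```python
# A 3x6 grayscale "image": dark region on the left, bright region on the right.
image = [
    [0, 0, 0, 9, 9, 9],
    [0, 0, 0, 9, 9, 9],
    [0, 0, 0, 9, 9, 9],
]

def edge_columns(img, threshold=4):
    """Return column indices where a strong left-to-right intensity jump occurs."""
    cols = set()
    for row in img:
        for x in range(len(row) - 1):
            if abs(row[x + 1] - row[x]) > threshold:
                cols.add(x + 1)
    return sorted(cols)

print(edge_columns(image))  # [3]
```

From numbers to "there is a vertical edge at column 3" is exactly the move from pixel data toward symbolic description that the paragraph above talks about.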
Hearing
Speech recognition is the inter-disciplinary sub-field of computational linguistics that develops methodologies and technologies that enable the recognition and translation of spoken language into text by computers. It is also known as “automatic speech recognition” (ASR), “computer speech recognition”, or just “speech to text” (STT). It incorporates knowledge and research from the linguistics, computer science, and electrical engineering fields. Some speech recognition systems require “training” (also called “enrollment”), where an individual speaker reads text or isolated vocabulary into the system. The system analyzes the person’s specific voice and uses it to fine-tune the recognition of that person’s speech, resulting in increased accuracy. Systems that use training are called “speaker dependent”; systems that do not are called “speaker independent”.
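In the spirit of early speaker-dependent recognisers, here is a sketch of template matching with dynamic time warping (DTW): an utterance is compared against stored templates despite differences in speaking rate. The numeric "feature" sequences stand in for real acoustic features and are made up for illustration.

```python
def dtw(a, b):
    """Return the dynamic-time-warping alignment cost between two sequences."""
    inf = float("inf")
    cost = [[inf] * (len(b) + 1) for _ in range(len(a) + 1)]
    cost[0][0] = 0.0
    for i in range(1, len(a) + 1):
        for j in range(1, len(b) + 1):
            d = abs(a[i - 1] - b[j - 1])
            # Best of: stretch a, stretch b, or advance both in step.
            cost[i][j] = d + min(cost[i - 1][j], cost[i][j - 1], cost[i - 1][j - 1])
    return cost[len(a)][len(b)]

# "Enrollment": one stored template per word, from one speaker.
templates = {"yes": [1, 3, 5, 3, 1], "no": [5, 5, 1, 1, 1]}
utterance = [1, 3, 3, 5, 3, 1]  # "yes" spoken a little slower
best = min(templates, key=lambda w: dtw(templates[w], utterance))
print(best)  # yes
```

The warping is what lets the same word match even when it is stretched in time — a crude mechanical analogue of recognising a familiar voice at different speeds.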
Hierarchical Temporal Memory
Hierarchical temporal memory (HTM) is a biologically constrained theory of machine intelligence originally described in the 2004 book On Intelligence by Jeff Hawkins with Sandra Blakeslee. HTM is based on neuroscience and on the physiology and interaction of pyramidal neurons in the neocortex of the human brain. At the core of HTM are learning algorithms that can store, learn, infer and recall high-order sequences. Unlike most other machine learning methods, HTM learns time-based patterns in unlabeled data on a continuous basis. HTM is robust to noise and has high capacity, meaning that it can learn multiple patterns simultaneously. When applied to computers, HTM is well suited for prediction, anomaly detection, classification and ultimately sensorimotor applications.
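HTM's real algorithms use sparse distributed representations and columns of cells, which are far beyond a few lines of code. The toy below only sketches one idea HTM embodies — continuously learning time-based sequences and predicting what comes next using more than one step of context. All names and sequences here are my own illustration, not HTM's actual machinery.

```python
from collections import defaultdict

class ToySequenceMemory:
    """A tiny online sequence learner: (prev, current) context -> possible next."""

    def __init__(self):
        self.transitions = defaultdict(set)

    def learn(self, sequence):
        """Observe a sequence and store its two-step context transitions."""
        for i in range(len(sequence) - 2):
            self.transitions[(sequence[i], sequence[i + 1])].add(sequence[i + 2])

    def predict(self, prev, current):
        """Return the set of symbols seen to follow this two-step context."""
        return self.transitions[(prev, current)]

memory = ToySequenceMemory()
memory.learn("ABCD")  # learning is continuous: no labels, no separate phases
memory.learn("XBCY")
print(sorted(memory.predict("B", "C")))  # ['D', 'Y'] - this context is ambiguous
print(sorted(memory.predict("A", "B")))  # ['C']
```

Note the ambiguity at context (B, C): two steps of history are not enough to tell the sequences apart, which is precisely why HTM's high-order memory keeps deeper context than this sketch does.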
Keep in mind that all inputs are encoded for the machine and all human inputs are also encoded.
It is this encoding that shows me the mind is indeed much different from the brain . . .
. . . and the mind's thoughts must be translated many times for the brain . . .
. . . to process - but neither the mind nor the brain understands the other . . .
Pattern recognition in the brain is less tolerant than the mind's abilities - much like hardware versus software.
Bringing it all together - this is what we are about to do . . .