Hierarchical temporal memory (HTM networks)
This post discusses HTM networks.
HTM networks are networks in which neurons within a hidden layer also share information with one another.
They have more edges than an ordinary neural network, in an attempt to structure the model like the complex human brain.
The big difference from deep learning is that HTM does not learn via back propagation; learning happens in a feed-forward manner instead.
HTM supports online learning and can be viewed as a sequential model, so it is increasingly used for anomaly detection on time-series data (a minimal sketch of that usage pattern follows below).
Source: https://github.com/llSourcell/numenta_explained
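To make the "online anomaly detection on a stream" idea concrete, here is a minimal sketch of the outer loop. `StreamingModel` is a hypothetical placeholder, not a real library class; a real HTM pipeline (e.g. NuPIC / htm.core) would encode the value into an SDR, run spatial pooling and temporal memory, and score how unexpected the value was.

```python
# Minimal sketch of an online (streaming) anomaly-detection loop.
# `StreamingModel` is a hypothetical stand-in, NOT a real HTM library API.

class StreamingModel:
    """Placeholder for an encoder + spatial pooler + temporal memory pipeline."""
    def __init__(self):
        self.last_value = None

    def step(self, value):
        """Learn from one value and return an anomaly score in [0, 1]."""
        # Here the "prediction" is simply the previous value; an HTM model
        # would instead compare the input against its predicted SDR.
        score = 0.0 if self.last_value is None else min(abs(value - self.last_value), 1.0)
        self.last_value = value
        return score

def detect_anomalies(stream, threshold=0.5):
    """Process the stream one value at a time: no batching, no experience replay."""
    model = StreamingModel()
    for t, value in enumerate(stream):
        score = model.step(value)            # learn and score in a single pass
        if score > threshold:
            print(f"t={t}: value={value} looks anomalous (score={score:.2f})")

detect_anomalies([0.1, 0.12, 0.11, 0.13, 0.95, 0.12])
```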
Online learning with Numenta
What is Numenta?
- Jeff Hawkins published On Intelligence in 2004.
- It was a good layman's pop-neuroscience book for its time, and it indirectly led people toward deep learning.
- Andrew Ng, for one, mentioned it as a strong influence.
How is their HTM system different from deep learning?
- HTM uses online unsupervised data (temporal streams), so it can't be compared to CIFAR results.
- LSTM is the closest comparison candidate, and it cannot do online learning without scaling very poorly (but online learning is crucial for AGI).
- Let's keep thinking of entirely new architectures. DeepMind's DQN + more processing != AGI (backprop isn't the end!).
- HTM is directly based on the neocortex; DL is merely inspired by neuroscience.
- DL systems are very good at low-level perceptual tasks, but they require huge numbers of examples to learn anything well.
- HTM can learn complex temporal structure with several orders of magnitude fewer examples (dozens rather than millions) and can also be easily configured to do reliable one-shot learning.
- HTM networks are not like ConvNets at all. Every aspect is different. Each neuron is more like a network in its own right (using active dendrites), and a layer of neurons uses local k-winner-takes-all (inhibition), not max pooling.
- In addition, there is complex structure in using columnar inhibition to represent arbitrary-order sequence memory and to encode prediction errors.
- HTM is only really like a convolutional neural network in that it is a tree-like architecture with locally connected (sparse) weights.
- Unlike ConvNets, it doesn't use shared weights, and it serves a different purpose: it acts like a very large predictive autoencoder that predicts its own input one step ahead of time. From there you can do reinforcement learning and other interesting things. HTM's spatial pooling isn't really like max pooling, though; it is almost exactly like a k-sparse autoencoder.
- HTM is not feed-forward. Information goes both up the hierarchy, to form abstractions and extract features, and back down, to form predictions.
TL;DR: HTM is like a really beefed-up autoencoder.
The main thing that interests me in HTM is its online nature. I don't know of any other algorithm that can learn in real time from data streams without using some form of experience replay. HTM can, and it does so quite well (it can learn a sheet of music, for instance, after as few as 26 iterations).
Apps that use HTM?
Hierarchical Temporal Memory explained
The Neocortex
- Reptiles don't have one; mammals do
- It's about the size of a dinner napkin
- Older parts of the brain are involved in the basic functions of life
- It makes you "you"
- Regions are linked together in a hierarchy
- Ideas become more abstract and permanent up the chain
- HTM principle - a common algorithm everywhere
- HTM principle - sequential memory
- HTM principle - online learning
- I/O of the neocortex (sensory input and motor commands)
- Sensory encoders - how to turn input values into SDRs
- GPS encoders have no biological counterpart
Sparse Distributed Representations
- Using 1s and 0s to represent neurons being on and off (the basic principle of HTM)
- The data structure of the brain
- Everything is SDRs; they are used for every aspect of cognitive function
- Neurons receive them from other neurons, everywhere
- They can overlap, i.e., sets and unions (combining SDRs; see the sketch below)
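A quick sketch of the overlap and union operations on SDRs, using plain NumPy binary vectors (the vector size and number of active bits here are illustrative choices, not values from the source):

```python
import numpy as np

rng = np.random.default_rng(0)

def random_sdr(size=2048, active_bits=40):
    """A random SDR: a mostly-zero binary vector with a few active bits."""
    sdr = np.zeros(size, dtype=np.uint8)
    sdr[rng.choice(size, active_bits, replace=False)] = 1
    return sdr

a, b = random_sdr(), random_sdr()

overlap = int(np.sum(a & b))   # shared active bits act as a similarity measure
union = a | b                  # a union SDR can store both patterns at once

print("overlap:", overlap)
print("active bits in union:", int(union.sum()))
```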
Encoders
- Like sensory organs, e.g. the retina or cochlea
- The outermost part of an HTM system
- Principle 1 - semantically similar data should produce highly overlapping outputs
- Principle 2 - the same input should always create the same output (deterministic)
- Principle 3 - the output should have the same dimensionality for every input
- Principle 4 - the output should have similar sparsity for every input (a minimal encoder sketch follows below)
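As a concrete illustration of those principles, below is a minimal scalar encoder sketch (the parameter names, sizes, and ranges are my own choices, not from the source): nearby values share many active bits, the same value always maps to the same bits, and every output has the same length and the same number of active bits.

```python
import numpy as np

def scalar_encode(value, min_val=0.0, max_val=100.0, size=400, active_bits=21):
    """Encode a scalar as an SDR: a contiguous block of `active_bits` ones
    whose position slides with the value (similar values -> overlapping bits)."""
    value = np.clip(value, min_val, max_val)
    n_buckets = size - active_bits + 1
    # Deterministic: the same value always lands in the same bucket.
    bucket = int(round((value - min_val) / (max_val - min_val) * (n_buckets - 1)))
    sdr = np.zeros(size, dtype=np.uint8)
    sdr[bucket:bucket + active_bits] = 1      # fixed dimensionality and sparsity
    return sdr

a, b, c = scalar_encode(10), scalar_encode(11), scalar_encode(90)
print("overlap(10, 11):", int(np.sum(a & b)))   # high overlap for similar values
print("overlap(10, 90):", int(np.sum(a & c)))   # little or no overlap
```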
Spatial Pooling
- Accepts an input vector and outputs another vector
- Important when talking about sequence memory
- Maintaining a fixed output sparsity is a goal
- Maintaining the overlap properties of the inputs in the outputs (a toy sketch follows below)
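A toy spatial pooling sketch under those constraints (column count, potential connections, and sparsity are arbitrary illustrative values, and learning/boosting is omitted): each column computes its overlap with the input through a random subset of connections, and only the top-k columns become active, so the output sparsity is fixed regardless of the input.

```python
import numpy as np

rng = np.random.default_rng(42)

def make_pooler(input_size=400, n_columns=256, potential_frac=0.5):
    """Each column connects to a random subset of the input bits."""
    return (rng.random((n_columns, input_size)) < potential_frac).astype(np.uint8)

def spatial_pool(connections, input_sdr, active_columns=10):
    """k-winner-takes-all: the columns with the highest overlap win."""
    overlaps = connections @ input_sdr                 # overlap score per column
    winners = np.argsort(overlaps)[-active_columns:]   # top-k columns
    output = np.zeros(connections.shape[0], dtype=np.uint8)
    output[winners] = 1                                # fixed output sparsity
    return output

connections = make_pooler()
input_sdr = np.zeros(400, dtype=np.uint8)
input_sdr[50:71] = 1                                   # e.g. the output of an encoder
print("active columns:", np.flatnonzero(spatial_pool(connections, input_sdr)))
```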
Temporal Memory
- Learns sequences
- Predicts outcomes (a simplified sketch follows below)
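HTM's real temporal memory learns high-order sequences with cells inside columns; the sketch below is a drastically simplified stand-in (a first-order transition table over symbols) just to illustrate the "learn sequences, predict outcomes" idea, with prediction failures usable as an anomaly signal.

```python
from collections import defaultdict

class FirstOrderSequenceMemory:
    """Simplified stand-in for HTM temporal memory: first-order transitions only."""
    def __init__(self):
        self.transitions = defaultdict(lambda: defaultdict(int))
        self.prev = None

    def step(self, symbol):
        """Return 1.0 if the symbol was unpredicted (surprise), else 0.0, then learn."""
        surprise = 0.0
        if self.prev is not None:
            if symbol not in self.transitions[self.prev]:
                surprise = 1.0
            self.transitions[self.prev][symbol] += 1   # online learning, no replay
        self.prev = symbol
        return surprise

tm = FirstOrderSequenceMemory()
for s in "ABCABCABCABX":
    print(s, tm.step(s))   # surprises fade as A->B->C is learned; the final X is flagged
```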
Towards AGI
- Weak AI won't produce intelligence
- We need to incorporate movement, i.e., interacting with the environment
- Better neuron models, like the pyramidal neuron; the neocortex is organized into layers and columns
- Columns contain layers, and layers contain neurons
- A neuron has an active state, an inactive state, and a predictive state
- It receives feedforward, lateral, and apical (higher-level) input (a toy sketch of these states follows below)
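A tiny sketch of that three-state neuron idea (the thresholds and overlap counts are illustrative only, and apical input is left out): a cell becomes active when its feedforward input is strong enough, and enters the predictive state when lateral (distal) input from previously active neighbours matches one of its learned patterns.

```python
from enum import Enum

class CellState(Enum):
    INACTIVE = 0
    ACTIVE = 1       # driven by feedforward (proximal) input
    PREDICTIVE = 2   # depolarized by lateral (distal) input, not yet firing

def update_cell(feedforward_overlap, lateral_overlap,
                ff_threshold=8, lateral_threshold=5):
    """Toy state update for an HTM-style cell (illustrative thresholds)."""
    if feedforward_overlap >= ff_threshold:
        return CellState.ACTIVE
    if lateral_overlap >= lateral_threshold:
        return CellState.PREDICTIVE      # "expects" to become active next step
    return CellState.INACTIVE

print(update_cell(feedforward_overlap=10, lateral_overlap=0))  # ACTIVE
print(update_cell(feedforward_overlap=2, lateral_overlap=7))   # PREDICTIVE
print(update_cell(feedforward_overlap=1, lateral_overlap=1))   # INACTIVE
```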
Is HTM similar to Hinton's Capsule Networks?
In both systems:
- objects are defined by the relative locations of their features
- a voting process figures out the most consistent interpretation of the sensory data
But the big difference is:
- HTM models movement. HTM explicitly models how information changes as we move our sensors (e.g. as we move our eyes around), and how to integrate information to quickly recognize objects.