Streaming data from different kinds of sensors contributes significantly to Big Data. Recognizing the norms and abnormalities in such spatiotemporal data is a challenging problem. We present a general-purpose, biologically plausible computational model, called SELP, for learning the norms or invariances as features, in an unsupervised and online manner, from explanations of saliencies or surprises in the data. Given streaming data, this model runs a relentless cycle of Surprise → Explain → Learn → Predict involving the real external world and its internal model, hence the name. The key characteristic of the model is its efficiency, crucial for streaming Big Data applications, which stems from two functionalities exploited at each sa...
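The Surprise → Explain → Learn → Predict cycle described above can be sketched as a simple online loop. This is a minimal illustrative sketch, not the authors' implementation: the linear feature model `W`, the learning rate, and the surprise threshold are all assumptions made for the example.

```python
import numpy as np

def selp_stream(frames, n_features=16, lr=0.01, threshold=1.0):
    """Hypothetical sketch of a SELP-style loop over a stream of frames.

    The internal model here is an assumed linear feature matrix W;
    the real SELP model is more elaborate.
    """
    dim = frames[0].size
    rng = np.random.default_rng(0)
    W = rng.standard_normal((n_features, dim)) * 0.01  # internal model
    prediction = np.zeros(dim)
    for frame in frames:
        x = frame.ravel().astype(float)
        # Surprise: mismatch between the model's prediction and the input
        error = x - prediction
        if np.linalg.norm(error) > threshold:
            # Explain: encode the surprising residual with current features
            code = W @ error
            # Learn: update the features only when surprised (efficiency)
            W += lr * np.outer(code, error)
        # Predict: reconstruct the expected next input from the model
        prediction = W.T @ (W @ x)
    return W
```

Learning only on surprise is what makes such a loop cheap per sample, which is the efficiency property the abstract emphasizes for streaming data.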
[Figure caption: (a) Inference on an example input image sequence of 10 frames. Top to bottom: Input sequence; model’...]
Abstract—Emerging stream mining applications require classification of large data streams generated...
Chunking is the process by which frequently repeated segments of temporal inputs are concatenated in...
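The chunking process described above, concatenating frequently repeated segments of a temporal sequence into single units, can be illustrated with a small sketch. The function name, the merge-one-pair-at-a-time strategy, and the frequency threshold are assumptions for this example, not the method of the cited work.

```python
from collections import Counter

def chunk_once(sequence, min_count=2):
    """Merge the most frequent adjacent pair in `sequence` into one unit.

    Illustrative sketch: a real chunking model would apply this kind of
    merge repeatedly and probabilistically over a stream.
    """
    pairs = Counter(zip(sequence, sequence[1:]))
    if not pairs:
        return sequence
    pair, count = pairs.most_common(1)[0]
    if count < min_count:
        return sequence  # nothing repeats often enough to chunk
    merged, i = [], 0
    while i < len(sequence):
        if i + 1 < len(sequence) and (sequence[i], sequence[i + 1]) == pair:
            merged.append(sequence[i] + sequence[i + 1])  # concatenated unit
            i += 2
        else:
            merged.append(sequence[i])
            i += 1
    return merged
```

For example, `chunk_once(list("ababc"))` merges the repeated pair into `["ab", "ab", "c"]`.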
Streaming sensorial data poses major computational challenges, such as lack of storage, inapplicabi...
Sensors that monitor around the clock are everywhere. Due to the sheer amount of data these sensors ...
Finding useful representations of data in order to facilitate scientific knowledge generation is a u...
In the era of big data, considerable research focus is being put on designing efficient algorithms c...
Streamed data are a potentially infinite sequence of incoming data arriving at very high speed and may evolv...
In the statistics and machine learning communities, there exists a perceived dichotomy between sta...
The ventral visual processing hierarchy of the cortex needs to fulfill at least two key functions: p...
The problem of forecasting streaming datasets, particularly financial time series, has been larg...
This dissertation discusses how predictive models are being used for scientific inquiry. Statistical...
The encoding of sensory information in the human brain is thought to be optimised by two principal p...