Prevalent predictive analytics (PA) techniques tend to rely on analyzing historical data, while many big data applications need predictive analytics that can access current information and provide insights or modify models in (near) real time. This necessitates new PA techniques that are in some sense “self-correcting” or “self-learning”, such as Bayesian methods or machine learning algorithms.
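As a minimal sketch of what “self-correcting” means here, the following illustrates a conjugate Bayesian update (a Beta-Bernoulli model, one common choice, not prescribed by the text): the model revises its estimate with each new observation as it arrives, rather than being refit on historical data in batch. All names and data are illustrative.

```python
class BetaBernoulli:
    """Tracks the probability of a binary event (e.g. click / no-click)."""

    def __init__(self, alpha: float = 1.0, beta: float = 1.0):
        # Beta(1, 1) is a uniform prior over the event probability.
        self.alpha = alpha
        self.beta = beta

    def update(self, success: bool) -> None:
        # Conjugate update: each observation shifts one pseudo-count,
        # so the model "self-corrects" in (near) real time.
        if success:
            self.alpha += 1
        else:
            self.beta += 1

    @property
    def mean(self) -> float:
        # Posterior mean of the event probability.
        return self.alpha / (self.alpha + self.beta)


model = BetaBernoulli()
for outcome in [True, False, True, True]:  # events arriving one at a time
    model.update(outcome)

print(round(model.mean, 3))  # posterior estimate after four observations
```

The same idea scales to streaming settings: the posterior after each event becomes the prior for the next, so no batch retraining pass is required.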
Event Engine: Components that process data in-flight to identify actionable events, determine the next-best-action based on decision context and event profile data, and persist the results in a durable storage system. Data Reservoir: Economical, scale-out storage and parallel processing for data that does not have stringent requirements for formalisation or modelling; typically manifested as a Hadoop cluster or a staging area in a relational database.
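The event engine's in-flight decisioning can be sketched as follows. The event shapes, profile fields, and rules are hypothetical stand-ins for whatever decision context a real deployment would hold; the list at the end stands in for the durable store.

```python
from dataclasses import dataclass


@dataclass
class Event:
    customer_id: str
    kind: str  # e.g. "cart_abandoned", "balance_low"


# Illustrative decision context: per-customer profile data.
PROFILES = {
    "c42": {"segment": "premium", "churn_risk": 0.8},
}


def next_best_action(event: Event) -> str:
    # Combine the in-flight event with profile data to pick an action.
    profile = PROFILES.get(event.customer_id, {})
    if event.kind == "cart_abandoned" and profile.get("churn_risk", 0) > 0.5:
        return "offer_discount"
    if event.kind == "balance_low":
        return "suggest_top_up"
    return "no_action"


decisions = []  # stand-in for the durable storage system
for ev in [Event("c42", "cart_abandoned"), Event("c7", "balance_low")]:
    decisions.append((ev.customer_id, next_best_action(ev)))

print(decisions)  # [('c42', 'offer_discount'), ('c7', 'suggest_top_up')]
```

In production this logic would typically sit inside a stream-processing framework rather than a plain loop, but the decision structure is the same.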
Data Sources. These represent all potential sources of raw data required by the business to address its information requirements. Sources include both internal and external systems. Data Ingestion and Information Interpretation. These are the methods and processes required to ingest information into, and interpret information from, each of the data layers in our architecture.
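A minimal sketch of ingestion into a data reservoir, under assumed record shapes and file names: raw records from internal and external sources are landed as-is (schema-on-read), and interpretation is applied only when a consumer reads the data back.

```python
import json
import os
import tempfile

# Illustrative raw records from two source systems.
raw_records = [
    {"source": "crm", "customer": "c42", "spend": 120.0},      # internal
    {"source": "social", "handle": "@c42", "sentiment": 0.7},  # external
]

reservoir = os.path.join(tempfile.mkdtemp(), "landing.jsonl")

# Ingestion: append raw records without enforcing a schema up front.
with open(reservoir, "a") as f:
    for rec in raw_records:
        f.write(json.dumps(rec) + "\n")

# Interpretation: impose structure at read time for one consumer's needs.
with open(reservoir) as f:
    crm_rows = [r for r in (json.loads(line) for line in f)
                if r.get("source") == "crm"]

print(len(crm_rows))  # 1
```

Deferring the schema to read time is what keeps the reservoir economical: sources with very different shapes can land in the same store without upfront modelling effort.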
Most of all, it is important to approach this type of analytics from a strategic perspective, rather than starting with the technology or the data. The strategy should be developed to suit the enterprise as a whole, to avoid conflict between, or duplication of effort by, sales, marketing, and so on.
New types of analytics are becoming possible as a direct result of the additional types of data now available. These analytics can help a company to sell more products and provide a better service and overall experience for customers.